Chapter 21 MySQL NDB Cluster 7.5 and NDB Cluster 7.6

Table of Contents

21.1 NDB Cluster Overview
21.1.1 NDB Cluster Core Concepts
21.1.2 NDB Cluster Nodes, Node Groups, Replicas, and Partitions
21.1.3 NDB Cluster Hardware, Software, and Networking Requirements
21.1.4 What is New in NDB Cluster
21.1.5 NDB: Added, Deprecated, and Removed Options, Variables, and Parameters
21.1.6 MySQL Server Using InnoDB Compared with NDB Cluster
21.1.7 Known Limitations of NDB Cluster
21.2 NDB Cluster Installation
21.2.1 The NDB Cluster Auto-Installer (NDB 7.5)
21.2.2 The NDB Cluster Auto-Installer (NDB 7.6)
21.2.3 Installation of NDB Cluster on Linux
21.2.4 Installing NDB Cluster on Windows
21.2.5 Initial Configuration of NDB Cluster
21.2.6 Initial Startup of NDB Cluster
21.2.7 NDB Cluster Example with Tables and Data
21.2.8 Safe Shutdown and Restart of NDB Cluster
21.2.9 Upgrading and Downgrading NDB Cluster
21.3 Configuration of NDB Cluster
21.3.1 Quick Test Setup of NDB Cluster
21.3.2 Overview of NDB Cluster Configuration Parameters, Options, and Variables
21.3.3 NDB Cluster Configuration Files
21.3.4 Using High-Speed Interconnects with NDB Cluster
21.4 NDB Cluster Programs
21.4.1 ndbd — The NDB Cluster Data Node Daemon
21.4.2 ndbinfo_select_all — Select From ndbinfo Tables
21.4.3 ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)
21.4.4 ndb_mgmd — The NDB Cluster Management Server Daemon
21.4.5 ndb_mgm — The NDB Cluster Management Client
21.4.6 ndb_blob_tool — Check and Repair BLOB and TEXT columns of NDB Cluster Tables
21.4.7 ndb_config — Extract NDB Cluster Configuration Information
21.4.8 ndb_cpcd — Automate Testing for NDB Development
21.4.9 ndb_delete_all — Delete All Rows from an NDB Table
21.4.10 ndb_desc — Describe NDB Tables
21.4.11 ndb_drop_index — Drop Index from an NDB Table
21.4.12 ndb_drop_table — Drop an NDB Table
21.4.13 ndb_error_reporter — NDB Error-Reporting Utility
21.4.14 ndb_import — Import CSV Data Into NDB
21.4.15 ndb_index_stat — NDB Index Statistics Utility
21.4.16 ndb_move_data — NDB Data Copy Utility
21.4.17 ndb_perror — Obtain NDB Error Message Information
21.4.18 ndb_print_backup_file — Print NDB Backup File Contents
21.4.19 ndb_print_file — Print NDB Disk Data File Contents
21.4.20 ndb_print_frag_file — Print NDB Fragment List File Contents
21.4.21 ndb_print_schema_file — Print NDB Schema File Contents
21.4.22 ndb_print_sys_file — Print NDB System File Contents
21.4.23 ndb_redo_log_reader — Check and Print Content of Cluster Redo Log
21.4.24 ndb_restore — Restore an NDB Cluster Backup
21.4.25 ndb_select_all — Print Rows from an NDB Table
21.4.26 ndb_select_count — Print Row Counts for NDB Tables
21.4.27 ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster
21.4.28 ndb_show_tables — Display List of NDB Tables
21.4.29 ndb_size.pl — NDBCLUSTER Size Requirement Estimator
21.4.30 ndb_top — View CPU usage information for NDB threads
21.4.31 ndb_waiter — Wait for NDB Cluster to Reach a Given Status
21.4.32 Options Common to NDB Cluster Programs
21.5 Management of NDB Cluster
21.5.1 Summary of NDB Cluster Start Phases
21.5.2 Commands in the NDB Cluster Management Client
21.5.3 Online Backup of NDB Cluster
21.5.4 MySQL Server Usage for NDB Cluster
21.5.5 Performing a Rolling Restart of an NDB Cluster
21.5.6 Event Reports Generated in NDB Cluster
21.5.7 NDB Cluster Log Messages
21.5.8 NDB Cluster Single User Mode
21.5.9 Quick Reference: NDB Cluster SQL Statements
21.5.10 ndbinfo: The NDB Cluster Information Database
21.5.11 INFORMATION_SCHEMA Tables for NDB Cluster
21.5.12 NDB Cluster Security Issues
21.5.13 NDB Cluster Disk Data Tables
21.5.14 Online Operations with ALTER TABLE in NDB Cluster
21.5.15 Adding NDB Cluster Data Nodes Online
21.5.16 Distributed Privileges Using Shared Grant Tables
21.5.17 NDB API Statistics Counters and Variables
21.6 NDB Cluster Replication
21.6.1 NDB Cluster Replication: Abbreviations and Symbols
21.6.2 General Requirements for NDB Cluster Replication
21.6.3 Known Issues in NDB Cluster Replication
21.6.4 NDB Cluster Replication Schema and Tables
21.6.5 Preparing the NDB Cluster for Replication
21.6.6 Starting NDB Cluster Replication (Single Replication Channel)
21.6.7 Using Two Replication Channels for NDB Cluster Replication
21.6.8 Implementing Failover with NDB Cluster Replication
21.6.9 NDB Cluster Backups With NDB Cluster Replication
21.6.10 NDB Cluster Replication: Multi-Master and Circular Replication
21.6.11 NDB Cluster Replication Conflict Resolution
21.7 NDB Cluster Release Notes

MySQL NDB Cluster is a high-availability, high-redundancy version of MySQL adapted for the distributed computing environment. Recent NDB Cluster release series use version 7 of the NDB storage engine (also known as NDBCLUSTER) to enable running several computers with MySQL servers and other software in a cluster. NDB Cluster 7.6, now available as a General Availability (GA) release beginning with version 7.6.6, incorporates version 7.6 of the NDB storage engine. NDB Cluster 7.5, still available as a GA release, uses version 7.5 of NDB. Previous GA releases still available for use in production, NDB Cluster 7.3 and NDB Cluster 7.4, incorporate NDB versions 7.3 and 7.4, respectively. NDB Cluster 7.2, which uses version 7.2 of the NDB storage engine, is a previous GA release that is currently still maintained; 7.2 users are encouraged to upgrade to NDB 7.5 or NDB 7.6.

Support for the NDB storage engine is not included in standard MySQL Server 5.7 binaries built by Oracle. Instead, users of NDB Cluster binaries from Oracle should upgrade to the most recent binary release of NDB Cluster for supported platforms—these include RPMs that should work with most Linux distributions. NDB Cluster users who build from source should use the sources provided for NDB Cluster. (Locations where the sources can be obtained are listed later in this section.)

Important

MySQL NDB Cluster does not support InnoDB cluster, which must be deployed using MySQL Server 5.7 with the InnoDB storage engine as well as additional applications that are not included in the NDB Cluster distribution. MySQL Server 5.7 binaries cannot be used with MySQL NDB Cluster. For more information about deploying and using InnoDB cluster, see Chapter 20, InnoDB Cluster. Section 21.1.6, “MySQL Server Using InnoDB Compared with NDB Cluster”, discusses differences between the NDB and InnoDB storage engines.

This chapter contains information about NDB Cluster 7.5 releases through 5.7.28-ndb-7.5.16 and NDB Cluster 7.6 releases through 5.7.28-ndb-7.6.12, both of which are now General Availability (GA) releases supported in production. NDB Cluster 7.6 is recommended for new deployments; for information about NDB Cluster 7.6, see Section 21.1.4.2, “What is New in NDB Cluster 7.6”. For similar information about NDB Cluster 7.5, see Section 21.1.4.1, “What is New in NDB Cluster 7.5”. NDB Cluster 7.4 and 7.3 are previous GA releases still supported in production; see MySQL NDB Cluster 7.3 and NDB Cluster 7.4. NDB Cluster 7.2 is a previous GA release series which is still maintained, although we recommend that new deployments for production use NDB Cluster 7.6. For more information about NDB Cluster 7.2, see MySQL NDB Cluster 7.2.

NDB Cluster 8.0 is now available as a Developer Preview release for evaluation and testing of new features in the NDBCLUSTER storage engine; for more information, see MySQL NDB Cluster 8.0.

Supported Platforms.  NDB Cluster is currently available and supported on a number of platforms. For exact levels of support available for specific combinations of operating system versions, operating system distributions, and hardware platforms, please refer to https://www.mysql.com/support/supportedplatforms/cluster.html.

Availability.  NDB Cluster binary and source packages are available for supported platforms from https://dev.mysql.com/downloads/cluster/.

NDB Cluster release numbers.  NDB Cluster follows a somewhat different release pattern from the mainline MySQL Server 5.7 series of releases. In this Manual and other MySQL documentation, we identify these and later NDB Cluster releases employing a version number that begins with NDB. This version number is that of the NDBCLUSTER storage engine used in the release, and not of the MySQL server version on which the NDB Cluster release is based.

Version strings used in NDB Cluster software.  The version string displayed by NDB Cluster programs uses this format:

mysql-mysql_server_version-ndb-ndb_engine_version

mysql_server_version represents the version of the MySQL Server on which the NDB Cluster release is based. For all NDB Cluster 7.5 and NDB Cluster 7.6 releases, this is 5.7. ndb_engine_version is the version of the NDB storage engine used by this release of the NDB Cluster software. You can see this format used in the mysql client, as shown here:

shell> mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.28-ndb-7.5.16 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SELECT VERSION()\G
*************************** 1. row ***************************
VERSION(): 5.7.28-ndb-7.5.16
1 row in set (0.00 sec)

This version string is also displayed in the output of the SHOW command in the ndb_mgm client:

ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.0.10.6  (5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=2    @10.0.10.8  (5.7.28-ndb-7.5.16, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=3    @10.0.10.2  (5.7.28-ndb-7.5.16)

[mysqld(API)]   2 node(s)
id=4    @10.0.10.10  (5.7.28-ndb-7.5.16)
id=5 (not connected, accepting connect from any host)
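
Output in this format can also be consumed by scripts. As an illustrative sketch (not an official tool), the following counts the node entries in a section of SHOW output; the sample lines are copied from the data node section shown above:

```shell
# Sample lines from the [ndbd(NDB)] section of the SHOW output above;
# counting the "id=" entries gives the number of node slots reported.
show_output='id=1    @10.0.10.6  (5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=2    @10.0.10.8  (5.7.28-ndb-7.5.16, Nodegroup: 0)'
data_node_count=$(printf '%s\n' "$show_output" | grep -c '^id=')
echo "$data_node_count"
```

In a live cluster, the same text could be captured noninteractively with ndb_mgm -e SHOW and filtered the same way.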

The version string identifies the mainline MySQL version from which the NDB Cluster release was branched and the version of the NDB storage engine used. For example, the full version string for NDB 7.5.4 (the first NDB 7.5 GA release) was mysql-5.7.16-ndb-7.5.4. From this we can determine that the release uses version 7.5.4 of the NDB storage engine and is based on MySQL Server 5.7.16.

New NDB Cluster releases are numbered according to updates in the NDB storage engine, and do not necessarily correspond in a one-to-one fashion with mainline MySQL Server releases. For example, NDB 7.5.4 (as previously noted) was based on MySQL 5.7.16, while NDB 7.5.3 was based on MySQL 5.7.13 (version string: mysql-5.7.13-ndb-7.5.3).

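
Because the version string always follows the mysql-mysql_server_version-ndb-ndb_engine_version pattern, the two components can be separated mechanically. A minimal shell sketch, using the NDB 7.5.4 string discussed above:

```shell
# Split an NDB Cluster version string into its MySQL Server
# and NDB storage engine components.
version_string="mysql-5.7.16-ndb-7.5.4"
server_version=${version_string#mysql-}   # strip the leading "mysql-"
server_version=${server_version%%-ndb-*}  # drop everything from "-ndb-" on
ndb_version=${version_string##*-ndb-}     # keep only the part after "-ndb-"
echo "$server_version $ndb_version"
```

Here server_version is 5.7.16 (the mainline MySQL Server base) and ndb_version is 7.5.4 (the NDB storage engine version).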
Compatibility with standard MySQL 5.7 releases.  While many standard MySQL schemas and applications can work using NDB Cluster, it is also true that unmodified applications and database schemas may be slightly incompatible or have suboptimal performance when run using NDB Cluster (see Section 21.1.7, “Known Limitations of NDB Cluster”). Most of these issues can be overcome, but this also means that you are very unlikely to be able to switch an existing application datastore—that currently uses, for example, MyISAM or InnoDB—to use the NDB storage engine without allowing for the possibility of changes in schemas, queries, and applications. In addition, the MySQL Server and NDB Cluster codebases diverge considerably, so that the standard mysqld cannot function as a drop-in replacement for the version of mysqld supplied with NDB Cluster.

NDB Cluster development source trees.  NDB Cluster development trees can also be accessed from https://github.com/mysql/mysql-server.

The NDB Cluster development sources maintained at https://github.com/mysql/mysql-server are licensed under the GPL. For information about obtaining MySQL sources using Git and building them yourself, see Section 2.9.5, “Installing MySQL Using a Development Source Tree”.

Note

As with MySQL Server 5.7, NDB Cluster 7.5 and NDB Cluster 7.6 releases are built using CMake.

NDB Cluster 7.5 and NDB Cluster 7.6 are available as General Availability (GA) releases; NDB 7.6 is recommended for new deployments. NDB Cluster 7.4 and NDB Cluster 7.3 are previous GA releases which are still supported in production. NDB 7.2 is a previous GA release series which is still maintained; it is no longer recommended for new deployments. For an overview of major features added in NDB 7.6, see Section 21.1.4.2, “What is New in NDB Cluster 7.6”. For similar information about NDB Cluster 7.5, see Section 21.1.4.1, “What is New in NDB Cluster 7.5”. For information about previous NDB Cluster releases, see MySQL NDB Cluster 7.3 and NDB Cluster 7.4, and MySQL NDB Cluster 7.2.

The contents of this chapter are subject to revision as NDB Cluster continues to evolve. Additional information regarding NDB Cluster can be found on the MySQL website at http://www.mysql.com/products/cluster/.

Additional Resources.  More information about NDB Cluster can be found in the following places:

21.1 NDB Cluster Overview

NDB Cluster is a technology that enables clustering of in-memory databases in a shared-nothing system. The shared-nothing architecture enables the system to work with very inexpensive hardware, and with a minimum of specific requirements for hardware or software.

NDB Cluster is designed not to have any single point of failure. In a shared-nothing system, each component is expected to have its own memory and disk, and the use of shared storage mechanisms such as network shares, network file systems, and SANs is not recommended or supported.

NDB Cluster integrates the standard MySQL server with an in-memory clustered storage engine called NDB (which stands for Network DataBase). In our documentation, the term NDB refers to the part of the setup that is specific to the storage engine, whereas MySQL NDB Cluster refers to the combination of one or more MySQL servers with the NDB storage engine.

An NDB Cluster consists of a set of computers, known as hosts, each running one or more processes. These processes, known as nodes, may include MySQL servers (for access to NDB data), data nodes (for storage of the data), one or more management servers, and possibly other specialized data access programs. The relationship of these components in an NDB Cluster is shown here:

Figure 21.1 NDB Cluster Components

In this cluster, three MySQL servers (mysqld program) are SQL nodes that provide access to four data nodes (ndbd program) that store data. The SQL nodes and data nodes are under the control of an NDB management server (ndb_mgmd program). Various clients and APIs can interact with the SQL nodes - the mysql client, the MySQL C API, PHP, Connector/J, and Connector/NET. Custom clients can also be created using the NDB API to interact with the data nodes or the NDB management server. The NDB management client (ndb_mgm program) interacts with the NDB management server.

All these programs work together to form an NDB Cluster (see Section 21.4, “NDB Cluster Programs”). When data is stored by the NDB storage engine, the tables (and table data) are stored in the data nodes. Such tables are directly accessible from all other MySQL servers (SQL nodes) in the cluster. Thus, in a payroll application storing data in a cluster, if one application updates the salary of an employee, all other MySQL servers that query this data can see this change immediately.

Although an NDB Cluster SQL node uses the mysqld server daemon, it differs in a number of critical respects from the mysqld binary supplied with the MySQL 5.7 distributions, and the two versions of mysqld are not interchangeable.

In addition, a MySQL server that is not connected to an NDB Cluster cannot use the NDB storage engine and cannot access any NDB Cluster data.

The data stored in the data nodes for NDB Cluster can be mirrored; the cluster can handle failures of individual data nodes with no other impact than that a small number of transactions are aborted due to losing the transaction state. Because transactional applications are expected to handle transaction failure, this should not be a source of problems.

Individual nodes can be stopped and restarted, and can then rejoin the system (cluster). Rolling restarts (in which all nodes are restarted in turn) are used in making configuration changes and software upgrades (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”). Rolling restarts are also used as part of the process of adding new data nodes online (see Section 21.5.15, “Adding NDB Cluster Data Nodes Online”). For more information about data nodes, how they are organized in an NDB Cluster, and how they handle and store NDB Cluster data, see Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”.

Backing up and restoring NDB Cluster databases can be done using the NDB-native functionality found in the NDB Cluster management client and the ndb_restore program included in the NDB Cluster distribution. For more information, see Section 21.5.3, “Online Backup of NDB Cluster”, and Section 21.4.24, “ndb_restore — Restore an NDB Cluster Backup”. You can also use the standard MySQL functionality provided for this purpose in mysqldump and the MySQL server. See Section 4.5.4, “mysqldump — A Database Backup Program”, for more information.

NDB Cluster nodes can employ different transport mechanisms for inter-node communications; TCP/IP over standard 100 Mbps or faster Ethernet hardware is used in most real-world deployments.

21.1.1 NDB Cluster Core Concepts

NDBCLUSTER (also known as NDB) is an in-memory storage engine offering high-availability and data-persistence features.

The NDBCLUSTER storage engine can be configured with a range of failover and load-balancing options, but it is easiest to start with the storage engine at the cluster level. NDB Cluster's NDB storage engine contains a complete set of data, dependent only on other data within the cluster itself.

The Cluster portion of NDB Cluster is configured independently of the MySQL servers. In an NDB Cluster, each part of the cluster is considered to be a node.

Note

In many contexts, the term node is used to indicate a computer, but when discussing NDB Cluster it means a process. It is possible to run multiple nodes on a single computer; for a computer on which one or more cluster nodes are being run we use the term cluster host.

There are three types of cluster nodes, and in a minimal NDB Cluster configuration, there will be at least three nodes, one of each of these types:

  • Management node: The role of this type of node is to manage the other nodes within the NDB Cluster, performing such functions as providing configuration data, starting and stopping nodes, and running backups. Because this node type manages the configuration of the other nodes, a node of this type should be started first, before any other node. An MGM node is started with the command ndb_mgmd.

  • Data node: This type of node stores cluster data. There are as many data nodes as there are replicas, times the number of fragments (see Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”). For example, with two replicas, each having two fragments, you need four data nodes. One replica is sufficient for data storage, but provides no redundancy; therefore, it is recommended to have 2 (or more) replicas to provide redundancy, and thus high availability. A data node is started with the command ndbd (see Section 21.4.1, “ndbd — The NDB Cluster Data Node Daemon”) or ndbmtd (see Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”).

    NDB Cluster tables are normally stored completely in memory rather than on disk (this is why we refer to NDB Cluster as an in-memory database). However, some NDB Cluster data can be stored on disk; see Section 21.5.13, “NDB Cluster Disk Data Tables”, for more information.

  • SQL node: This is a node that accesses the cluster data. In the case of NDB Cluster, an SQL node is a traditional MySQL server that uses the NDBCLUSTER storage engine. An SQL node is a mysqld process started with the --ndbcluster and --ndb-connectstring options, which are explained elsewhere in this chapter, possibly with additional MySQL server options as well.

    An SQL node is actually just a specialized type of API node, which designates any application which accesses NDB Cluster data. Another example of an API node is the ndb_restore utility that is used to restore a cluster backup. It is possible to write such applications using the NDB API. For basic information about the NDB API, see Getting Started with the NDB API.

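
The data-node arithmetic described above (number of data nodes = number of replicas × number of fragments) can be checked with a one-line calculation. The values below are the two-replica, two-fragment example from the data node description:

```shell
# Data nodes needed = replicas * fragments
# (the two-replica, two-fragment example above)
replicas=2
fragments=2
data_nodes=$(( replicas * fragments ))
echo "$data_nodes"
```

This prints 4, matching the four data nodes called for in that example.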
Important

It is not realistic to expect to employ a three-node setup in a production environment. Such a configuration provides no redundancy; to benefit from NDB Cluster's high-availability features, you must use multiple data and SQL nodes. The use of multiple management nodes is also highly recommended.

For a brief introduction to the relationships between nodes, node groups, replicas, and partitions in NDB Cluster, see Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”.

Configuration of a cluster involves configuring each individual node in the cluster and setting up individual communication links between nodes. NDB Cluster is currently designed with the intention that data nodes are homogeneous in terms of processor power, memory space, and bandwidth. In addition, to provide a single point of configuration, all configuration data for the cluster as a whole is located in one configuration file.

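
Such a single configuration file (conventionally named config.ini and read by the management server) might look like the following minimal sketch for one management node, two data nodes, and one SQL node. The host names, node IDs, and memory values here are illustrative placeholders, not recommendations:

```ini
# Illustrative minimal config.ini (placeholder hosts and values)
[ndbd default]
NoOfReplicas=2
DataMemory=98M
IndexMemory=32M

[ndb_mgmd]
NodeId=1
HostName=mgm.example.com

[ndbd]
NodeId=2
HostName=data1.example.com

[ndbd]
NodeId=3
HostName=data2.example.com

[mysqld]
NodeId=4
HostName=sql1.example.com
```

Section 21.3.3, “NDB Cluster Configuration Files”, documents the full set of sections and parameters.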
The management server manages the cluster configuration file and the cluster log. Each node in the cluster retrieves the configuration data from the management server, and so requires a way to determine where the management server resides. When interesting events occur in the data nodes, the nodes transfer information about these events to the management server, which then writes the information to the cluster log.

In addition, there can be any number of cluster client processes or applications. These include standard MySQL clients, NDB-specific API programs, and management clients. These are described in the next few paragraphs.

Standard MySQL clients.  NDB Cluster can be used with existing MySQL applications written in PHP, Perl, C, C++, Java, Python, Ruby, and so on. Such client applications send SQL statements to and receive responses from MySQL servers acting as NDB Cluster SQL nodes in much the same way that they interact with standalone MySQL servers.

MySQL clients using an NDB Cluster as a data source can be modified to take advantage of the ability to connect with multiple MySQL servers to achieve load balancing and failover. For example, Java clients using Connector/J 5.0.6 and later can use jdbc:mysql:loadbalance:// URLs (improved in Connector/J 5.1.7) to achieve load balancing transparently; for more information about using Connector/J with NDB Cluster, see Using Connector/J with NDB Cluster.

NDB client programs.  Client programs can be written that access NDB Cluster data directly from the NDBCLUSTER storage engine, bypassing any MySQL Servers that may be connected to the cluster, using the NDB API, a high-level C++ API. Such applications may be useful for specialized purposes where an SQL interface to the data is not needed. For more information, see The NDB API.

NDB-specific Java applications can also be written for NDB Cluster using the NDB Cluster Connector for Java. This NDB Cluster Connector includes ClusterJ, a high-level database API similar to object-relational mapping persistence frameworks such as Hibernate and JPA that connect directly to NDBCLUSTER, and so does not require access to a MySQL Server. Support is also provided in NDB Cluster for ClusterJPA, an OpenJPA implementation for NDB Cluster that leverages the strengths of ClusterJ and JDBC; ID lookups and other fast operations are performed using ClusterJ (bypassing the MySQL Server), while more complex queries that can benefit from MySQL's query optimizer are sent through the MySQL Server, using JDBC. See Java and NDB Cluster, and The ClusterJ API and Data Object Model, for more information.

NDB Cluster also supports applications written in JavaScript using Node.js. The MySQL Connector for JavaScript includes adapters for direct access to the NDB storage engine as well as for the MySQL Server. Applications using this Connector are typically event-driven and use a domain object model similar in many ways to that employed by ClusterJ. For more information, see MySQL NoSQL Connector for JavaScript.

The Memcache API for NDB Cluster, implemented as the loadable ndbmemcache storage engine for memcached version 1.6 and later, can be used to provide a persistent NDB Cluster data store, accessed using the memcache protocol.

The standard memcached caching engine is included in the NDB Cluster 7.5 distribution. Each memcached server has direct access to data stored in NDB Cluster, but is also able to cache data locally and to serve (some) requests from this local cache.

For more information, see ndbmemcache—Memcache API for NDB Cluster.

Management clients.  These clients connect to the management server and provide commands for starting and stopping nodes gracefully, starting and stopping message tracing (debug versions only), showing node versions and status, starting and stopping backups, and so on. An example of this type of program is the ndb_mgm management client supplied with NDB Cluster (see Section 21.4.5, “ndb_mgm — The NDB Cluster Management Client”). Such applications can be written using the MGM API, a C-language API that communicates directly with one or more NDB Cluster management servers. For more information, see The MGM API.

Oracle also makes available MySQL Cluster Manager, which provides an advanced command-line interface simplifying many complex NDB Cluster management tasks, such as restarting an NDB Cluster with a large number of nodes. The MySQL Cluster Manager client also supports commands for getting and setting the values of most node configuration parameters as well as mysqld server options and variables relating to NDB Cluster. See MySQL™ Cluster Manager 1.4.7 User Manual, for more information.

Event logs.  NDB Cluster logs events by category (startup, shutdown, errors, checkpoints, and so on), priority, and severity. A complete listing of all reportable events may be found in Section 21.5.6, “Event Reports Generated in NDB Cluster”. Event logs are of the two types listed here:

  • Cluster log: Keeps a record of all desired reportable events for the cluster as a whole.

  • Node log: A separate log which is also kept for each individual node.

Note

Under normal circumstances, it is necessary and sufficient to keep and examine only the cluster log. The node logs need be consulted only for application development and debugging purposes.

Checkpoint.  Generally speaking, when data is saved to disk, it is said that a checkpoint has been reached. More specific to NDB Cluster, a checkpoint is a point in time where all committed transactions are stored on disk. With regard to the NDB storage engine, there are two types of checkpoints which work together to ensure that a consistent view of the cluster's data is maintained. These are shown in the following list:

  • Local Checkpoint (LCP): This is a checkpoint that is specific to a single node; however, LCPs take place for all nodes in the cluster more or less concurrently. An LCP usually occurs every few minutes; the precise interval varies, and depends upon the amount of data stored by the node, the level of cluster activity, and other factors.

    Prior to NDB 7.6.4, an LCP involved saving all of a node's data to disk. NDB 7.6.4 introduces support for partial LCPs, which can significantly improve recovery time under some conditions. See Section 21.1.4.2, “What is New in NDB Cluster 7.6”, for more information, as well as the descriptions of the EnablePartialLcp and RecoveryWork configuration parameters which enable partial LCPs and control the amount of storage they use.

  • Global Checkpoint (GCP): A GCP occurs every few seconds, when transactions for all nodes are synchronized and the redo-log is flushed to disk.

For more information about the files and directories created by local checkpoints and global checkpoints, see NDB Cluster Data Node File System Directory Files.
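The intervals at which GCPs and LCPs occur are governed by data node configuration parameters set in config.ini. The following fragment is a minimal illustration only; the values shown are assumptions for the sake of example, and the parameter descriptions should be consulted before changing either setting:

```ini
# config.ini fragment (illustrative values, not recommendations)
[ndbd default]
TimeBetweenGlobalCheckpoints=2000   # milliseconds between global checkpoint group commits
TimeBetweenLocalCheckpoints=20      # base-2 logarithm of the write volume (in 4-byte words)
                                    # that triggers a new local checkpoint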

21.1.2 NDB Cluster Nodes, Node Groups, Replicas, and Partitions

This section discusses the manner in which NDB Cluster divides and duplicates data for storage.

A number of concepts central to an understanding of this topic are discussed in the next few paragraphs.

Data node.  An ndbd or ndbmtd process, which stores one or more replicas—that is, copies of the partitions (discussed later in this section) assigned to the node group of which the node is a member.

Each data node should be located on a separate computer. While it is also possible to host multiple data node processes on a single computer, such a configuration is not usually recommended.

It is common for the terms node and data node to be used interchangeably when referring to an ndbd or ndbmtd process; where mentioned, management nodes (ndb_mgmd processes) and SQL nodes (mysqld processes) are specified as such in this discussion.

Node group.  A node group consists of one or more nodes, and stores partitions, or sets of replicas (see next item).

The number of node groups in an NDB Cluster is not directly configurable; it is a function of the number of data nodes and of the number of replicas (NoOfReplicas configuration parameter), as shown here:

[# of node groups] = [# of data nodes] / NoOfReplicas

Thus, an NDB Cluster with 4 data nodes has 4 node groups if NoOfReplicas is set to 1 in the config.ini file, 2 node groups if NoOfReplicas is set to 2, and 1 node group if NoOfReplicas is set to 4. Replicas are discussed later in this section; for more information about NoOfReplicas, see Section 21.3.3.6, “Defining NDB Cluster Data Nodes”.
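The arithmetic above can be sketched as follows (a minimal illustration of the formula, not part of any NDB tool):

```python
# [# of node groups] = [# of data nodes] / NoOfReplicas

def node_groups(data_nodes, no_of_replicas):
    # The number of data nodes must be a multiple of NoOfReplicas.
    assert data_nodes % no_of_replicas == 0, "data nodes must divide evenly by NoOfReplicas"
    return data_nodes // no_of_replicas

# The three cases described in the text, for a cluster with 4 data nodes:
assert node_groups(4, 1) == 4
assert node_groups(4, 2) == 2
assert node_groups(4, 4) == 1
```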

Note

All node groups in an NDB Cluster must have the same number of data nodes.

You can add new node groups (and thus new data nodes) online, to a running NDB Cluster; see Section 21.5.15, “Adding NDB Cluster Data Nodes Online”, for more information.

Partition.  This is a portion of the data stored by the cluster. Each node is responsible for keeping at least one copy of any partitions assigned to it (that is, at least one replica) available to the cluster.

The number of partitions used by default by NDB Cluster depends on the number of data nodes and the number of LDM threads in use by the data nodes, as shown here:

[# of partitions] = [# of data nodes] * [# of LDM threads]

When using data nodes running ndbmtd, the number of LDM threads is controlled by the setting for MaxNoOfExecutionThreads. When using ndbd there is a single LDM thread, which means that there are as many cluster partitions as nodes participating in the cluster. This is also the case when using ndbmtd with MaxNoOfExecutionThreads set to 3 or less. (You should be aware that the number of LDM threads increases with the value of this parameter, but not in a strictly linear fashion, and that there are additional constraints on setting it; see the description of MaxNoOfExecutionThreads for more information.)
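The default partition count can be sketched in the same way (the LDM thread count is passed in directly here, since its derivation from MaxNoOfExecutionThreads is not strictly linear, as noted above):

```python
# [# of partitions] = [# of data nodes] * [# of LDM threads]

def default_partitions(data_nodes, ldm_threads=1):
    # ndbd always runs a single LDM thread, hence the default of 1.
    return data_nodes * ldm_threads

assert default_partitions(4) == 4        # four ndbd data nodes -> four partitions
assert default_partitions(4, 4) == 16    # ndbmtd with 4 LDM threads per node
```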

NDB and user-defined partitioning.  NDB Cluster normally partitions NDBCLUSTER tables automatically. However, it is also possible to employ user-defined partitioning with NDBCLUSTER tables. This is subject to the following limitations:

  1. Only the KEY and LINEAR KEY partitioning schemes are supported in production with NDB tables.

  2. The maximum number of partitions that may be defined explicitly for any NDB table is 8 * MaxNoOfExecutionThreads * [number of node groups], the number of node groups in an NDB Cluster being determined as discussed previously in this section. When using ndbd for data node processes, setting MaxNoOfExecutionThreads has no effect; in such a case, it can be treated as though it were equal to 1 for purposes of performing this calculation.

    See Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”, for more information.

For more information relating to NDB Cluster and user-defined partitioning, see Section 21.1.7, “Known Limitations of NDB Cluster”, and Section 22.6.2, “Partitioning Limitations Relating to Storage Engines”.
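The limit in item 2 above can be sketched as follows (an illustration of the formula only; when data nodes run ndbd, MaxNoOfExecutionThreads is treated as 1, as the text states):

```python
# [max partitions] = 8 * MaxNoOfExecutionThreads * [# of node groups]

def max_user_partitions(max_exec_threads, node_groups, using_ndbd=False):
    if using_ndbd:
        # Setting MaxNoOfExecutionThreads has no effect with ndbd;
        # treat it as 1 for this calculation.
        max_exec_threads = 1
    return 8 * max_exec_threads * node_groups

assert max_user_partitions(4, 2) == 64               # ndbmtd, 2 node groups
assert max_user_partitions(4, 2, using_ndbd=True) == 16
```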

Replica.  This is a copy of a cluster partition. Each node in a node group stores a replica. Also sometimes known as a partition replica. The number of replicas is equal to the number of nodes per node group.

A replica belongs entirely to a single node; a node can (and usually does) store several replicas.

The following diagram illustrates an NDB Cluster with four data nodes running ndbd, arranged in two node groups of two nodes each; nodes 1 and 2 belong to node group 0, and nodes 3 and 4 belong to node group 1.

Note

Only data nodes are shown here; although a working NDB Cluster requires an ndb_mgmd process for cluster management and at least one SQL node to access the data stored by the cluster, these have been omitted from the figure for clarity.

Figure 21.2 NDB Cluster with Two Node Groups

The data stored by the cluster is divided into four partitions, numbered 0, 1, 2, and 3. Each partition is stored—in multiple copies—on the same node group. Partitions are stored on alternate node groups as follows:

  • Partition 0 is stored on node group 0; a primary replica (primary copy) is stored on node 1, and a backup replica (backup copy of the partition) is stored on node 2.

  • Partition 1 is stored on the other node group (node group 1); this partition's primary replica is on node 3, and its backup replica is on node 4.

  • Partition 2 is stored on node group 0. However, the placing of its two replicas is reversed from that of Partition 0; for Partition 2, the primary replica is stored on node 2, and the backup on node 1.

  • Partition 3 is stored on node group 1, and the placement of its two replicas are reversed from those of partition 1. That is, its primary replica is located on node 4, with the backup on node 3.

What this means regarding the continued operation of an NDB Cluster is this: so long as each node group participating in the cluster has at least one node operating, the cluster has a complete copy of all data and remains viable. This is illustrated in the next diagram.

Figure 21.3 Nodes Required for a 2x2 NDB Cluster

In this example, the cluster consists of two node groups each consisting of two data nodes. Each data node is running an instance of ndbd. Any combination of at least one node from node group 0 and at least one node from node group 1 is sufficient to keep the cluster alive. However, if both nodes from a single node group fail, the combination consisting of the remaining two nodes in the other node group is not sufficient. In this situation, the cluster has lost an entire partition and so can no longer provide access to a complete set of all NDB Cluster data.
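The survival rule described above (at least one live node per node group) can be sketched as:

```python
# A cluster remains viable as long as every node group retains at least
# one surviving data node; losing all nodes of any one node group loses
# an entire partition.

def cluster_viable(node_groups):
    """node_groups: one set of surviving node IDs per node group."""
    return all(len(group) > 0 for group in node_groups)

# 2x2 cluster: node group 0 = nodes {1, 2}, node group 1 = nodes {3, 4}
assert cluster_viable([{1}, {3}])            # one survivor per group: viable
assert not cluster_viable([set(), {3, 4}])   # node group 0 lost: not viable
```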

In NDB 7.5.4 and later, the maximum number of node groups supported for a single NDB Cluster instance is 48 (Bug #80845, Bug #22996305).

21.1.3 NDB Cluster Hardware, Software, and Networking Requirements

One of the strengths of NDB Cluster is that it can be run on commodity hardware and has no unusual requirements in this regard, other than for large amounts of RAM, due to the fact that all live data storage is done in memory. (It is possible to reduce this requirement using Disk Data tables—see Section 21.5.13, “NDB Cluster Disk Data Tables”, for more information about these.) Naturally, multiple and faster CPUs can enhance performance. Memory requirements for other NDB Cluster processes are relatively small.

The software requirements for NDB Cluster are also modest. Host operating systems do not require any unusual modules, services, applications, or configuration to support NDB Cluster. For supported operating systems, a standard installation should be sufficient. The MySQL software requirements are simple: all that is needed is a production release of NDB Cluster. It is not strictly necessary to compile MySQL yourself merely to be able to use NDB Cluster. We assume that you are using the binaries appropriate to your platform, available from the NDB Cluster software downloads page at https://dev.mysql.com/downloads/cluster/.

For communication between nodes, NDB Cluster supports TCP/IP networking in any standard topology, and the minimum expected for each host is a standard 100 Mbps Ethernet card, plus a switch, hub, or router to provide network connectivity for the cluster as a whole. We strongly recommend that an NDB Cluster be run on its own subnet which is not shared with machines not forming part of the cluster for the following reasons:

  • Security.  Communications between NDB Cluster nodes are not encrypted or shielded in any way. The only means of protecting transmissions within an NDB Cluster is to run your NDB Cluster on a protected network. If you intend to use NDB Cluster for Web applications, the cluster should definitely reside behind your firewall and not in your network's De-Militarized Zone (DMZ) or elsewhere.

    See Section 21.5.12.1, “NDB Cluster Security and Networking Issues”, for more information.

  • Efficiency.  Setting up an NDB Cluster on a private or protected network enables the cluster to make exclusive use of bandwidth between cluster hosts. Using a separate switch for your NDB Cluster not only helps protect against unauthorized access to NDB Cluster data, it also ensures that NDB Cluster nodes are shielded from interference caused by transmissions between other computers on the network. For enhanced reliability, you can use dual switches and dual cards to remove the network as a single point of failure; many device drivers support failover for such communication links.

Network communication and latency.  NDB Cluster requires communication between data nodes and API nodes (including SQL nodes), as well as between data nodes and other data nodes, to execute queries and updates. Communication latency between these processes can directly affect the observed performance and latency of user queries. In addition, to maintain consistency and service despite the silent failure of nodes, NDB Cluster uses heartbeating and timeout mechanisms which treat an extended loss of communication from a node as node failure. This can lead to reduced redundancy. Recall that, to maintain data consistency, an NDB Cluster shuts down when the last node in a node group fails. Thus, to avoid increasing the risk of a forced shutdown, breaks in communication between nodes should be avoided wherever possible.

The failure of a data or API node results in the abort of all uncommitted transactions involving the failed node. Data node recovery requires synchronization of the failed node's data from a surviving data node, and re-establishment of disk-based redo and checkpoint logs, before the data node returns to service. This recovery can take some time, during which the Cluster operates with reduced redundancy.

Heartbeating relies on timely generation of heartbeat signals by all nodes. This may not be possible if the node is overloaded, has insufficient machine CPU due to sharing with other programs, or is experiencing delays due to swapping. If heartbeat generation is sufficiently delayed, other nodes treat the node that is slow to respond as failed.

This treatment of a slow node as a failed one may or may not be desirable in some circumstances, depending on the impact of the node's slowed operation on the rest of the cluster. When setting timeout values such as HeartbeatIntervalDbDb and HeartbeatIntervalDbApi for NDB Cluster, care must be taken to achieve quick detection, failover, and return to service, while avoiding potentially expensive false positives.

Where communication latencies between data nodes are expected to be higher than would be expected in a LAN environment (on the order of 100 µs), timeout parameters must be increased to ensure that any allowed periods of latency are well within configured timeouts. Increasing timeouts in this way has a corresponding effect on the worst-case time to detect failure and therefore time to service recovery.
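As a concrete illustration of this tradeoff, heartbeat intervals are set in the [ndbd default] section of config.ini. The values below are assumptions chosen for a higher-latency environment, not recommendations:

```ini
# config.ini fragment (illustrative values only, in milliseconds)
[ndbd default]
HeartbeatIntervalDbDb=7500    # data node <-> data node heartbeat interval
HeartbeatIntervalDbApi=7500   # data node <-> API/SQL node heartbeat interval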

LAN environments can typically be configured with stable low latency, and such that they can provide redundancy with fast failover. Individual link failures can be recovered from with minimal and controlled latency visible at the TCP level (where NDB Cluster normally operates). WAN environments may offer a range of latencies, as well as redundancy with slower failover times. Individual link failures may require route changes to propagate before end-to-end connectivity is restored. At the TCP level this can appear as large latencies on individual channels. The worst-case observed TCP latency in these scenarios is related to the worst-case time for the IP layer to reroute around the failures.

21.1.4 What is New in NDB Cluster

The following sections describe changes in the implementation of NDB Cluster in MySQL NDB Cluster 7.5 and NDB Cluster 7.6 as compared to earlier release series. NDB Cluster 7.5 is available as a General Availability release beginning with NDB 7.5.4. NDB Cluster 7.6 is also available as a General Availability release beginning with NDB 7.6.6, and is recommended for new deployments. For information about additions and other changes in NDB Cluster 7.5, see Section 21.1.4.1, “What is New in NDB Cluster 7.5”; for information about new features and other changes in NDB Cluster 7.6, see Section 21.1.4.2, “What is New in NDB Cluster 7.6”.

NDB Cluster 7.4 is a recent General Availability release still supported for new deployments. NDB Cluster 7.3 is a previous GA release, still supported in production for existing deployments. NDB Cluster 7.2 is also a previous GA release series which is still supported in production. We recommend that new deployments use NDB Cluster 7.6, which is the latest GA release. For information about features added in NDB 7.4, see What is New in NDB Cluster 7.4; What is New in NDB Cluster 7.3 contains information about features added in NDB 7.3. For information about NDB Cluster 7.2 and previous NDB Cluster releases, see What is New in MySQL NDB Cluster 7.2.

NDB Cluster 8.0 is now available as a Developer Preview release for evaluation and testing of new features in the NDBCLUSTER storage engine; for more information, see MySQL NDB Cluster 8.0.

21.1.4.1 What is New in NDB Cluster 7.5

Major changes and new features in NDB Cluster 7.5 which are likely to be of interest are shown in the following list:

  • ndbinfo Enhancements.  A number of changes are made in the ndbinfo database, chief of which is that it now provides detailed information about NDB Cluster node configuration parameters.

    The config_params table has been made read-only, and has been enhanced with additional columns providing information about each configuration parameter, including the parameter's type, default value, maximum and minimum values (where applicable), a brief description of the parameter, and whether the parameter is required. This table also provides each parameter with a unique param_number.

    A row in the config_values table shows the current value of a given parameter on the node having a specified ID. The parameter is identified by the value of the config_param column, which maps to the config_params table's param_number.

    Using this relationship you can write a join on these two tables to obtain the default, maximum, minimum, and current values for one or more NDB Cluster configuration parameters by name. An example SQL statement using such a join is shown here:

    SELECT  p.param_name AS Name,
            v.node_id AS Node,
            p.param_type AS Type,
            p.param_default AS 'Default',
            p.param_min AS Minimum,
            p.param_max AS Maximum,
            CASE p.param_mandatory WHEN 1 THEN 'Y' ELSE 'N' END AS 'Required',
            v.config_value AS Current
    FROM    config_params p
    JOIN    config_values v
    ON      p.param_number = v.config_param
    WHERE   p.param_name IN ('NodeId', 'HostName', 'DataMemory', 'IndexMemory');
    

    For more information about these changes, see Section 21.5.10.8, “The ndbinfo config_params Table”. See Section 21.5.10.9, “The ndbinfo config_values Table”, for further information and examples.

    In addition, the ndbinfo database no longer depends on the MyISAM storage engine. All ndbinfo tables and views now use NDB (shown as NDBINFO).

    Several new ndbinfo tables were introduced in NDB 7.5.4. These tables are listed here, with brief descriptions:

    • dict_obj_info provides the names and types of database objects in NDB, as well as information about parent objects where applicable

    • table_distribution_status provides NDB table distribution status information

    • table_fragments provides information about the distribution of NDB table fragments

    • table_info provides information about logging, checkpointing, storage, and other options in force for each NDB table

    • table_replicas provides information about fragment replicas

    See the descriptions of the individual tables for more information.

  • Default row and column format changes.  Starting with NDB 7.5.1, the default value for both the ROW_FORMAT option and the COLUMN_FORMAT option for CREATE TABLE can be set to DYNAMIC rather than FIXED, using a new MySQL server variable ndb_default_column_format, which is added as part of this change; set this to FIXED or DYNAMIC (or start mysqld with the equivalent option --ndb-default-column-format=FIXED) to force this value to be used for COLUMN_FORMAT and ROW_FORMAT. Prior to NDB 7.5.4, the default for this variable was DYNAMIC; in this and later versions, the default is FIXED, which provides backwards compatibility with prior releases (Bug #24487363).

    The row format and column format used by existing table columns are unaffected by this change. New columns added to such tables use the new defaults for these (possibly overridden by ndb_default_column_format), and existing columns are changed to use these as well, provided that the ALTER TABLE statement performing this operation specifies ALGORITHM=COPY.

    Note

    A copying ALTER TABLE cannot be done implicitly if mysqld is run with --ndb-allow-copying-alter-table=FALSE.

  • ndb_binlog_index no longer dependent on MyISAM.  As of NDB 7.5.2, the ndb_binlog_index table employed in NDB Cluster Replication now uses the InnoDB storage engine instead of MyISAM. When upgrading, you can run mysql_upgrade with --force --upgrade-system-tables to cause it to execute ALTER TABLE ... ENGINE=INNODB on this table. Use of MyISAM for this table remains supported for backward compatibility.

    A benefit of this change is that it makes it possible to depend on transactional behavior and lock-free reads for this table, which can help alleviate concurrency issues during purge operations and log rotation, and improve the availability of this table.

  • ALTER TABLE changes.  NDB Cluster formerly supported an alternative syntax for online ALTER TABLE. This is no longer supported in NDB Cluster 7.5, which makes exclusive use of ALGORITHM = DEFAULT|COPY|INPLACE for table DDL, as in the standard MySQL Server.

    Another change affecting the use of this statement is that ALTER TABLE ... ALGORITHM=INPLACE RENAME may now contain DDL operations in addition to the renaming.

  • ExecuteOnComputer parameter deprecated.  The ExecuteOnComputer configuration parameter for management nodes, data nodes, and API nodes has been deprecated and is now subject to removal in a future release of NDB Cluster. You should use the equivalent HostName parameter for all three types of nodes.

  • records-per-key optimization.  The NDB handler now uses the records-per-key interface for index statistics implemented for the optimizer in MySQL 5.7.5. Some of the benefits from this change include those listed here:

    • The optimizer now chooses better execution plans in many cases where a less optimal join index or table join order would previously have been chosen

    • Row estimates shown by EXPLAIN are more accurate

    • Cardinality estimates shown by SHOW INDEX are improved

  • Connection pool node IDs.  NDB 7.5.0 adds the mysqld --ndb-cluster-connection-pool-nodeids option, which allows a set of node IDs to be set for the connection pool. This setting overrides --ndb-nodeid, which means that it also overrides both the --ndb-connectstring option and the NDB_CONNECTSTRING environment variable.

    Note

    You can set the size for the connection pool using the --ndb-cluster-connection-pool option for mysqld.

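
    As a sketch, the pool size and node IDs might be set together in an SQL node's my.cnf file as follows; the host name and ID values are illustrative only:

```ini
[mysqld]
ndb-connectstring=mgmhost:1186
# Pool of 4 cluster connections, each with an explicit node ID
ndb-cluster-connection-pool=4
ndb-cluster-connection-pool-nodeids=100,101,102,103
```
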
  • create_old_temporals removed.  The create_old_temporals system variable was deprecated in NDB Cluster 7.4, and has now been removed.

  • ndb_mgm Client PROMPT command.  NDB Cluster 7.5 adds a new command for setting the client's command-line prompt. The following example illustrates the use of the PROMPT command:

    ndb_mgm> PROMPT mgm#1:
    mgm#1: SHOW
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=5    @10.100.1.1  (mysql-5.7.28-ndb-7.5.16, Nodegroup: 0, *)
    id=6    @10.100.1.3  (mysql-5.7.28-ndb-7.5.16, Nodegroup: 0)
    id=7    @10.100.1.9  (mysql-5.7.28-ndb-7.5.16, Nodegroup: 1)
    id=8    @10.100.1.11  (mysql-5.7.28-ndb-7.5.16, Nodegroup: 1)
    
    [ndb_mgmd(MGM)] 1 node(s)
    id=50   @10.100.1.8  (mysql-5.7.28-ndb-7.5.16)
    
    [mysqld(API)]   2 node(s)
    id=100  @10.100.1.8  (5.7.28-ndb-7.5.16)
    id=101  @10.100.1.10  (5.7.28-ndb-7.5.16)
    
    mgm#1: PROMPT
    ndb_mgm> EXIT
    jon@valhaj:/usr/local/mysql/bin>
    

    For additional information and examples, see Section 21.5.2, “Commands in the NDB Cluster Management Client”.

  • Increased FIXED column storage per fragment.  NDB Cluster 7.5 and later supports a maximum of 128 TB per fragment of data in FIXED columns. In NDB Cluster 7.4 and earlier, this was 16 GB per fragment.

  • Deprecated parameters removed.  The following NDB Cluster data node configuration parameters were deprecated in previous releases of NDB Cluster, and were removed in NDB 7.5.0:

    • Id: deprecated in NDB 7.1.9; replaced by NodeId.

    • NoOfDiskPagesToDiskDuringRestartTUP, NoOfDiskPagesToDiskDuringRestartACC: both deprecated, had no effect; replaced in MySQL 5.1.6 by DiskCheckpointSpeedInRestart, which itself was later deprecated (in NDB 7.4.1) and is now also removed.

    • NoOfDiskPagesToDiskAfterRestartACC, NoOfDiskPagesToDiskAfterRestartTUP: both deprecated, and had no effect; replaced in MySQL 5.1.6 by DiskCheckpointSpeed, which itself was later deprecated (in NDB 7.4.1) and is now also removed.

    • ReservedSendBufferMemory: deprecated in NDB 7.2.5; no longer had any effect.

    • MaxNoOfIndexes: archaic (pre-MySQL 4.1), had no effect; long since replaced by MaxNoOfOrderedIndexes or MaxNoOfUniqueHashIndexes.

    • Discless: archaic (pre-MySQL 4.1) synonym for and long since replaced by Diskless.

    The archaic and unused (and for this reason also previously undocumented) ByteOrder computer configuration parameter was also removed in NDB 7.5.0.

    The parameters just described are not supported in NDB 7.5. Attempting to use any of these parameters in an NDB Cluster configuration file now results in an error.

  • DBTC scan enhancements.  Scans have been improved by reducing the number of signals used for communication between the DBTC and DBDIH kernel blocks in NDB. This decreases the use of CPU resources for scan operations, in some cases by an estimated five percent, and so enables higher scalability of data nodes used for scan operations.

    Also, as a result of these changes, response times should be greatly improved, which can help prevent issues with overload of the main threads. In addition, scans made in the BACKUP kernel block have also been improved and made more efficient than in previous releases.

  • JSON column support.  NDB 7.5.2 and later supports the JSON column type for NDB tables and the JSON functions found in the MySQL Server, subject to the limitation that an NDB table can have at most 3 JSON columns.

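
    A minimal sketch, assuming an SQL node connected to the cluster; the table and column names are illustrative:

```sql
-- An NDB table may have at most 3 JSON columns
CREATE TABLE jt (
    id INT NOT NULL PRIMARY KEY,
    doc JSON,
    meta JSON
) ENGINE=NDB;

-- Standard MySQL JSON functions work with such columns
SELECT id, JSON_EXTRACT(doc, '$.name') FROM jt;
```
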
  • Read from any replica; specify number of hashmap partition fragments.  Previously, all reads were directed towards the primary replica except for simple reads. (A simple read is a read that locks the row while reading it.) Beginning with NDB 7.5.2, it is possible to enable reads from any replica. This is disabled by default but can be enabled for a given SQL node using the ndb_read_backup system variable added in this release.

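
    Enabling this for a given SQL node might be sketched as follows; this assumes ndb_read_backup behaves as a settable global system variable, as described above:

```sql
-- Tables created through this SQL node while the variable is
-- enabled allow reads from any replica (disabled by default)
SET GLOBAL ndb_read_backup = ON;
```
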
    Previously, it was possible to define tables with only one type of partition mapping, with one primary partition on each LDM in each node, but in NDB 7.5.2 it becomes possible to be more flexible about the assignment of partitions by setting a partition balance (fragment count type). Possible balance schemes are one per node, one per node group, one per LDM per node, and one per LDM per node group.

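
    The effect of the four balance schemes on partition counts can be sketched as follows. This is an illustrative model only, not NDB source code, and the scheme names here are taken from the descriptions above rather than from the actual option values:

```python
# Illustrative model of partition counts under the four balance
# schemes described above (not actual NDB code).
def fragment_count(scheme, data_nodes, node_groups, ldm_per_node):
    """Return the number of partitions implied by a balance scheme."""
    counts = {
        "one_per_node": data_nodes,
        "one_per_node_group": node_groups,
        "one_per_ldm_per_node": data_nodes * ldm_per_node,
        "one_per_ldm_per_node_group": node_groups * ldm_per_node,
    }
    return counts[scheme]

# Example: 4 data nodes in 2 node groups, 4 LDM threads per node
print(fragment_count("one_per_ldm_per_node", 4, 2, 4))       # 16
print(fragment_count("one_per_node_group", 4, 2, 4))         # 2
```
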
    This setting can be controlled for individual tables by means of a PARTITION_BALANCE option (renamed from FRAGMENT_COUNT_TYPE in NDB 7.5.4) embedded in NDB_TABLE comments in CREATE TABLE or ALTER TABLE statements. Settings for table-level READ_BACKUP are also supported using this syntax. For more information and examples, see Section 13.1.18.10, “Setting NDB_TABLE Options”.

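
    As a sketch of the comment syntax (the table name is illustrative; see the section referenced above for the full list of permitted PARTITION_BALANCE values):

```sql
-- Set partition balance and READ_BACKUP when creating the table
CREATE TABLE t1 (
    c1 INT NOT NULL PRIMARY KEY
) ENGINE=NDB
  COMMENT='NDB_TABLE=PARTITION_BALANCE=FOR_RP_BY_NODE,READ_BACKUP=1';
```
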
    In NDB API applications, a table's partition balance can also be retrieved and set using methods supplied for this purpose; see Table::getPartitionBalance(), and Table::setPartitionBalance(), as well as Object::PartitionBalance, for more information about these.

    As part of this work, NDB 7.5.2 also introduces the ndb_data_node_neighbour system variable. This is intended for use, in transaction hinting, to provide a nearby data node to this SQL node.

    In addition, when restoring table schemas, ndb_restore --restore-meta now uses the target cluster's default partitioning, rather than using the same number of partitions as the original cluster from which the backup was taken. See Section 21.4.24.1.2, “Restoring to More Nodes Than the Original”, for more information and an example.

    NDB 7.5.3 adds a further enhancement to READ_BACKUP: In this and later versions, it is possible to set READ_BACKUP for a given table online as part of ALTER TABLE ... ALGORITHM=INPLACE ....

  • ThreadConfig improvements.  A number of enhancements and feature additions are implemented in NDB 7.5.2 for the ThreadConfig multithreaded data node (ndbmtd) configuration parameter, including support for an increased number of platforms. These changes are described in the next few paragraphs.

    Non-exclusive CPU locking is now supported on FreeBSD and Windows, using cpubind and cpuset. Exclusive CPU locking is now supported on Solaris (only) using the cpubind_exclusive and cpuset_exclusive parameters which are introduced in this release.

    Thread prioritization is now available, controlled by the new thread_prio parameter. thread_prio is supported on Linux, FreeBSD, Windows, and Solaris, and varies somewhat by platform. For more information, see the description of ThreadConfig.

    The realtime parameter is now supported on Windows platforms.

  • Partitions larger than 16 GB.  Due to an improvement in the hash index implementation used by NDB Cluster data nodes, partitions of NDB tables may now contain more than 16 GB of data for fixed columns, and the maximum partition size for fixed columns is now raised to 128 TB. The previous limitation was due to the fact that the DBACC block in the NDB kernel used only 32-bit references to the fixed-size part of a row in the DBTUP block, although 45-bit references to this data are used in DBTUP itself and elsewhere in the kernel outside DBACC; all such references to the data handled in the DBACC block now use 45 bits instead.

  • Print SQL statements from ndb_restore.  NDB 7.5.4 adds the --print-sql-log option for the ndb_restore utility provided with the NDB Cluster distribution. This option enables SQL logging to stdout. Important: Every table to be restored using this option must have an explicitly defined primary key.

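
    An invocation might be sketched as follows; the connection string, node and backup IDs, and backup path are illustrative only:

```shell
# Restore data while logging the equivalent SQL to stdout
ndb_restore --connect-string=mgmhost:1186 \
    --nodeid=1 --backupid=1 \
    --restore-data --print-sql-log \
    --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1
```
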
    See Section 21.4.24, “ndb_restore — Restore an NDB Cluster Backup”, for more information.

  • Organization of RPM packages.  Beginning with NDB 7.5.4, the naming and organization of RPM packages provided for NDB Cluster align more closely with those released for the MySQL server. The names of all NDB Cluster RPMs are now prefixed with mysql-cluster. Data nodes are now installed using the data-node package; management nodes are now installed from the management-server package; and SQL nodes require the server and common packages. MySQL and NDB client programs, including the mysql client and the ndb_mgm management client, are now included in the client RPM.

    For a detailed listing of NDB Cluster RPMs and other information, see Section 21.2.3.2, “Installing NDB Cluster from RPM”.

  • ndbinfo processes and config_nodes tables.  NDB 7.5.7 adds two tables to the ndbinfo information database to provide information about cluster nodes; these tables are listed here:

    • config_nodes: This table provides the node ID, process type, and host name for each node listed in an NDB cluster's configuration file.

    • processes: This table shows information about nodes currently connected to the cluster; this information includes the process name and system process ID; for each data node and SQL node, it also shows the process ID of the node's angel process. In addition, the table shows a service address for each connected node; this address can be set in NDB API applications using the Ndb_cluster_connection::set_service_uri() method, which is also added in NDB 7.5.7.

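
    These tables can be queried from the mysql client like any others; the columns shown here follow the descriptions above:

```sql
SELECT node_id, node_type, node_hostname
  FROM ndbinfo.config_nodes;

SELECT node_id, process_name, process_id, angel_process_id, service_URI
  FROM ndbinfo.processes;
```
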
  • System name.  The system name of an NDB cluster can be used to identify a specific cluster. Beginning with NDB 7.5.7, the MySQL Server shows this name as the value of the Ndb_system_name status variable; NDB API applications can use the Ndb_cluster_connection::get_system_name() method which is added in the same release.

    A system name based on the time the management server was started is generated automatically; you can override this value by adding a [system] section to the cluster's configuration file and setting the Name parameter to a value of your choice in this section, prior to starting the management server.

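
    A sketch of such a configuration file section follows; the name shown is illustrative:

```ini
# config.ini: set an explicit system name before starting ndb_mgmd
[system]
Name=MYCLUSTER
```
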
  • ndb_restore options.  Beginning with NDB 7.5.13, the --nodeid and --backupid options are both required when invoking ndb_restore.

NDB Cluster 7.5 is also supported by MySQL Cluster Manager, which provides an advanced command-line interface that can simplify many complex NDB Cluster management tasks. See MySQL™ Cluster Manager 1.4.7 User Manual, for more information.

21.1.4.2 What is New in NDB Cluster 7.6

New features and other important changes in NDB Cluster 7.6 which are likely to be of interest are shown in the following list:

  • New Disk Data table file format.  A new file format was introduced in NDB 7.6.2 for NDB Disk Data tables, which makes it possible for each Disk Data table to be uniquely identified without reusing any table IDs. The format was improved further in NDB 7.6.4. This should help resolve issues with page and extent handling that were visible to the user as problems with rapid creating and dropping of Disk Data tables, and for which the old format did not provide a ready means to fix.

    The new format is now used whenever new undo log file groups and tablespace data files are created. Files relating to existing Disk Data tables continue to use the old format until their tablespaces and undo log file groups are re-created.

    Important

    The old and new formats are not compatible; different data files or undo log files that are used by the same Disk Data table or tablespace cannot use a mix of formats.

    To avoid problems relating to the changes in format, you should re-create any existing tablespaces and undo log file groups when upgrading to NDB 7.6.2 or NDB 7.6.4. You can do this by performing an initial restart of each data node (that is, using the --initial option) as part of the upgrade process. You can expect this step to be made mandatory as part of upgrading from NDB 7.5 or an earlier release series to NDB 7.6 or later.

    If you are using Disk Data tables, a downgrade from any NDB 7.6 release—without regard to release status—to any NDB 7.5 or earlier release requires that you restart all data nodes with --initial as part of the downgrade process. This is because NDB 7.5 and earlier release series are not able to read the new Disk Data file format.

    For more information, see Section 21.2.9, “Upgrading and Downgrading NDB Cluster”.

  • Data memory pooling and dynamic index memory.  Memory required for indexes on NDB table columns is now allocated dynamically from that allocated for DataMemory. For this reason, the IndexMemory configuration parameter is now deprecated, and subject to removal in a future release series.

    Important

    Starting with NDB 7.6.2, if IndexMemory is set in the config.ini file, the management server issues the warning IndexMemory is deprecated, use Number bytes on each ndbd(DB) node allocated for storing indexes instead on startup, and any memory assigned to this parameter is automatically added to DataMemory.

    In addition, the default value for DataMemory has been increased to 98M; the default for IndexMemory has been decreased to 0.

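
    A configuration fragment illustrating the change might look like this; the value is illustrative, and any IndexMemory setting present is simply added to DataMemory with a deprecation warning:

```ini
# config.ini (illustrative): index memory now comes out of
# DataMemory, so only DataMemory needs to be sized
[ndbd default]
DataMemory=3G
```
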
    The pooling together of index memory with data memory simplifies the configuration of NDB; a further benefit of these changes is that scaling up by increasing the number of LDM threads is no longer limited by having set an insufficiently large value for IndexMemory. This is because index memory is no longer a static quantity which is allocated only once (when the cluster starts), but can now be allocated and deallocated as required. Previously, it was sometimes the case that increasing the number of LDM threads could lead to index memory exhaustion while large amounts of DataMemory remained available.

    As part of this work, a number of instances of DataMemory usage not directly related to storage of table data now use transaction memory instead.

    For this reason, it may be necessary on some systems to increase SharedGlobalMemory to allow transaction memory to increase when needed, such as when using NDB Cluster Replication, which requires a great deal of buffering on the data nodes. On systems performing initial bulk loads of data, it may be necessary to break up very large transactions into smaller parts.

    In addition, data nodes now generate MemoryUsage events (see Section 21.5.6.2, “NDB Cluster Log Events”) and write appropriate messages in the cluster log when resource usage reaches 99%, as well as when it reaches 80%, 90%, or 100%, as before.

    Other related changes are listed here:

    • IndexMemory is no longer one of the values displayed in the ndbinfo.memoryusage table's memory_type column; it is also no longer displayed in the output of ndb_config.

    • REPORT MEMORYUSAGE and other commands which expose memory consumption now show index memory consumption using 32K pages (previously these were 8K pages).

    • The ndbinfo.resources table now shows the DISK_OPERATIONS resource as TRANSACTION_MEMORY, and the RESERVED resource has been removed.

  • ndbinfo processes and config_nodes tables.  NDB 7.6.2 adds two tables to the ndbinfo information database to provide information about cluster nodes; these tables are listed here:

    • config_nodes: This table provides the node ID, process type, and host name for each node listed in an NDB cluster's configuration file.

    • processes: This table shows information about nodes currently connected to the cluster; this information includes the process name and system process ID; for each data node and SQL node, it also shows the process ID of the node's angel process. In addition, the table shows a service address for each connected node; this address can be set in NDB API applications using the Ndb_cluster_connection::set_service_uri() method, which is also added in NDB 7.6.2.

  • System name.  The system name of an NDB cluster can be used to identify a specific cluster. Beginning with NDB 7.6.2, the MySQL Server shows this name as the value of the Ndb_system_name status variable; NDB API applications can use the Ndb_cluster_connection::get_system_name() method which is added in the same release.

    A system name based on the time the management server was started is generated automatically; you can override this value by adding a [system] section to the cluster's configuration file and setting the Name parameter to a value of your choice in this section, prior to starting the management server.

  • Improved GUI installer.  The NDB Cluster Auto-Installer has been enhanced in a number of respects, as described in the following list:

    • The installer now provides persistent storage in an encrypted .mcc file as an alternative to cookie-based storage. Persistent storage is now used by default.

    • The installer now uses secure (HTTPS) connections by default between the browser client and the web server backend.

    • The Paramiko security library used by the installer has been upgraded to version 2. Other improvements in the installer's SSH functionality include the ability to use passwords for encrypted private keys and to use different credentials with different hosts.

    • Retrieval of host information has been improved, and the installer now provides accurate figures for the amount of disk space available on hosts.

    • Configuration has been improved, with most node parameters now available for setting in the GUI. In addition, parameters whose permitted values are enumerated have those values displayed for selection when setting them. It is also now possible to toggle the display of advanced configuration parameters on a global or per-node basis.

    For more details and usage information, see Section 21.2.2, “The NDB Cluster Auto-Installer (NDB 7.6)”.

  • ndb_import CSV import tool.  ndb_import, added in NDB Cluster 7.6.2, loads CSV-formatted data directly into an NDB table using the NDB API (a MySQL server is needed only to create the table and database in which it is located). ndb_import can be regarded as an analog of mysqlimport or the LOAD DATA INFILE SQL statement, and supports many of the same or similar options for formatting of the data.

    Assuming that the database and target NDB table exist, ndb_import needs only a connection to the cluster's management server (ndb_mgmd) to perform the importation; for this reason, there must be an [api] slot available to the tool in the cluster's config.ini file.

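
    An invocation might be sketched as follows, assuming a database mydb and a CSV file /tmp/t1.csv whose base name matches the target table; the connection string and option values are illustrative:

```shell
# Load /tmp/t1.csv into table t1 of database mydb; only the
# management server connection is needed, not a mysqld
ndb_import --ndb-connectstring=mgmhost:1186 mydb /tmp/t1.csv \
    --fields-terminated-by=","
```
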
    See Section 21.4.14, “ndb_import — Import CSV Data Into NDB”, for more information.

  • ndb_top monitoring tool.  Added the ndb_top utility, which shows CPU load and usage information for an NDB data node in real time. This information can be displayed in text format, as an ASCII graph, or both. The graph can be shown in color, or using grayscale.

    ndb_top connects to an NDB Cluster SQL node (that is, a MySQL Server). For this reason, the program must be able to connect as a MySQL user having the SELECT privilege on tables in the ndbinfo database.

    ndb_top is available for Linux, Solaris, and Mac OS X platforms beginning with NDB 7.6.3. It is not currently available for Windows platforms.

    For more information, see Section 21.4.30, “ndb_top — View CPU usage information for NDB threads”.

  • Code cleanup.  A significant number of debugging statements and printouts not necessary for normal operations have been moved into code used only when testing or debugging NDB, or dispensed with altogether. This removal of overhead should result in a noticeable improvement in the performance of LDM and TC threads on the order of 10% in many cases.

  • LDM thread and LCP improvements.  Previously, when a local data management thread experienced I/O lag, it wrote to local checkpoints more slowly. This could happen, for example, during a disk overload condition. Problems could occur because other LDM threads did not always observe this state, or do likewise. NDB now tracks I/O lag mode globally, so that this state is reported as soon as at least one thread is writing in I/O lag mode; it then makes sure that the reduced write speed for this LCP is enforced for all LDM threads for the duration of the slowdown condition. Because the reduction in write speed is now observed by other LDM instances, overall capacity is increased; this enables the disk overload (or other condition inducing I/O lag) to be overcome more quickly in such cases than it was previously.

  • NDB error identification.  Error messages and information can be obtained using the mysql client in NDB 7.6.4 and later from a new error_messages table in the ndbinfo information database. In addition, the 7.6.4 release introduces a command-line client ndb_perror for obtaining information from NDB error codes; this replaces using perror with --ndb, which is now deprecated and subject to removal in a future release.

    For more information, see Section 21.5.10.21, “The ndbinfo error_messages Table”, and Section 21.4.17, “ndb_perror — Obtain NDB Error Message Information”.

  • SPJ improvements.  When executing a scan as a pushed join (that is, the root of the query is a scan), the DBTC block sends an SPJ request to a DBSPJ instance on the same node as the fragment to be scanned. Formerly, one such request was sent for each of the node's fragments. As the number of DBTC and DBSPJ instances is normally set less than the number of LDM instances, this means that all SPJ instances were involved in the execution of a single query, and, in fact, some SPJ instances could (and did) receive multiple requests from the same query. In NDB 7.6.4, it becomes possible for a single SPJ request to handle a set of root fragments to be scanned, so that only a single SPJ request (SCAN_FRAGREQ) needs to be sent to any given SPJ instance (DBSPJ block) on each node.

    Since DBSPJ consumes a relatively small amount of the total CPU used when evaluating a pushed join, unlike the LDM block (which is responsible for the majority of the CPU usage), introducing multiple SPJ blocks adds some parallelism, but the additional overhead also increases. By enabling a single SPJ request to handle a set of root fragments to be scanned, such that only a single SPJ request is sent to each DBSPJ instance on each node and batch sizes are allocated per fragment, the multi-fragment scan can obtain a larger total batch size. This allows some scheduling optimizations to be done within the SPJ block, which can scan a single fragment at a time (giving it the total batch size allocation), scan all fragments in parallel using smaller sub-batches, or some combination of the two.

    This work is expected to increase performance of pushed-down joins for the following reasons:

    • Since multiple root fragments can be scanned for each SPJ request, it is necessary to request fewer SPJ instances when executing a pushed join

    • Increased available batch size allocation, and for each fragment, should also in most cases result in fewer requests being needed to complete a join

  • Improved O_DIRECT handling for redo logs.  NDB 7.6.4 implements a new data node configuration parameter ODirectSyncFlag which causes completed redo log writes using O_DIRECT to be handled as fsync calls. ODirectSyncFlag is disabled by default; to enable it, set it to true.

    You should bear in mind that the setting for this parameter is ignored when at least one of the following conditions is true:

    • ODirect is not enabled.

    • InitFragmentLogFiles is set to SPARSE.
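
    A minimal config.ini sketch (values illustrative) showing the settings under which ODirectSyncFlag actually takes effect:

```ini
[ndbd default]
# ODirectSyncFlag is ignored unless ODirect is also enabled
ODirect=1
ODirectSyncFlag=1
# Must not be SPARSE, or ODirectSyncFlag is ignored
InitFragmentLogFiles=FULL
```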

  • Locking of CPUs to offline index build threads.  In NDB 7.6.4 and later, offline index builds by default use all cores available to ndbmtd, instead of being limited to the single core reserved for the I/O thread. It also becomes possible to specify a desired set of cores to be used for I/O threads performing offline multithreaded builds of ordered indexes. This can improve restart and restore times and performance, as well as availability.

    Note

    Offline as used here refers to an ordered index build that takes place while a given table is not being written to. Such index builds occur during a node or system restart, or when restoring a cluster from backup using ndb_restore --rebuild-indexes.

    This improvement involves several related changes. The first of these is a change in the default value for the BuildIndexThreads configuration parameter (from 0 to 128), which means that offline ordered index builds are now multithreaded by default. The default value for TwoPassInitialNodeRestartCopy is also changed (from false to true), so that an initial node restart first copies all data, without creating any indexes, from a live node to the node which is being started, builds the ordered indexes offline after the data has been copied, then again synchronizes with the live node; this can significantly reduce the time required for building indexes. In addition, to facilitate explicit locking of offline index build threads to specific CPUs, a new thread type (idxbld) is defined for the ThreadConfig configuration parameter.

    As part of this work, NDB can now distinguish between execution thread types and other types of threads, and between types of threads which are permanently assigned to specific tasks, and those whose assignments are merely temporary.

    NDB 7.6.4 also introduces the nosend parameter for ThreadConfig. By setting this to 1, you can keep a main, ldm, rep, or tc thread from assisting the send threads. This parameter is 0 by default, and cannot be used with I/O threads, send threads, index build threads, or watchdog threads.
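
    As an illustrative sketch (CPU numbers and thread counts are hypothetical), both features might be combined in a ThreadConfig setting in config.ini like this:

```ini
[ndbd default]
# Keep the main thread from assisting send threads; bind offline
# index build threads (idxbld) to their own set of CPUs
ThreadConfig=main={nosend=1},ldm={count=4,cpubind=1,2,3,4},io={cpubind=0},idxbld={cpubind=5,6,7,8}
```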

    For additional information, see the descriptions of the parameters.

  • Variable batch sizes for DDL bulk data operations.  As part of work ongoing to optimize bulk DDL performance by ndbmtd, it is now possible to obtain performance improvements by increasing the batch size for the bulk data parts of DDL operations processing data using scans. Batch sizes are now made configurable for unique index builds, foreign key builds, and online reorganization, by setting the respective data node configuration parameters listed here:

    • MaxUIBuildBatchSize: Maximum scan batch size used for building unique keys.

    • MaxFKBuildBatchSize: Maximum scan batch size used for building foreign keys.

    • MaxReorgBuildBatchSize: Maximum scan batch size used for reorganization of table partitions.

    For each of the parameters just listed, the default value is 64, the minimum is 16, and the maximum is 512.
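
    For example, the batch sizes might be raised from their defaults in the [ndbd default] section of config.ini (the value 128 is illustrative only):

```ini
[ndbd default]
# Permitted range for each: 16 to 512 (default 64)
MaxUIBuildBatchSize=128
MaxFKBuildBatchSize=128
MaxReorgBuildBatchSize=128
```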

    Increasing the appropriate batch size or sizes can help amortize inter-thread and inter-node latencies and make use of more parallel resources (local and remote) to help scale DDL performance. In each case there can be a tradeoff with ongoing traffic.

  • Partial LCPs.  NDB 7.6.4 implements partial local checkpoints. Formerly, an LCP always made a copy of the entire database. When working with terabytes of data this process could require a great deal of time, with an adverse impact especially on node and cluster restarts, as well as more space for the redo logs. It is no longer strictly necessary for LCPs to do this; instead, an LCP now by default saves only a number of records that is based on the quantity of data changed since the previous LCP. This can vary between a full checkpoint and a checkpoint that changes nothing at all. In the event that the checkpoint reflects any changes, the minimum is to write one of the 2048 parts making up a local LCP.

    As part of this change, two new data node configuration parameters are introduced in this release: EnablePartialLcp (default true, or enabled) enables partial LCPs. RecoveryWork controls the percentage of space given over to LCPs; it increases with the amount of work which must be performed on LCPs during restarts as opposed to that performed during normal operations. Raising this value causes LCPs during normal operations to require writing fewer records and so decreases the usual workload. Raising this value also means that restarts can take longer.

    You can disable partial LCPs explicitly by setting EnablePartialLcp=false. This uses the least amount of disk, but also tends to maximize the write load for LCPs. To optimize for the lowest workload on LCPs during normal operation, use EnablePartialLcp=true and RecoveryWork=100. To use the least disk space for partial LCPs, but with bounded writes, use EnablePartialLcp=true and RecoveryWork=25, which is the minimum for RecoveryWork. The default is EnablePartialLcp=true with RecoveryWork=50, which means LCP files require approximately 1.5 times DataMemory; using CompressedLcp=1, this can be further reduced by half. Recovery times using the default settings should also be much faster than when EnablePartialLcp is set to false.

    Note

    The default value for RecoveryWork was increased from 50 to 60 in NDB 7.6.5.
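
    The settings discussed above might appear in config.ini as in this sketch (showing the defaults, with optional LCP compression):

```ini
[ndbd default]
EnablePartialLcp=true
# 50 in NDB 7.6.4; 60 is the default from NDB 7.6.5
RecoveryWork=60
# Optionally halve LCP disk usage at some CPU cost
CompressedLcp=1
```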

    In addition, the data node configuration parameters BackupDataBufferSize, BackupWriteSize, and BackupMaxWriteSize are all now deprecated, and subject to removal in a future release of MySQL NDB Cluster.

    As part of this enhancement, work has been done to correct several issues with node restarts wherein it was possible to run out of undo log in various situations, most often when restoring a node that had been down for a long time during a period of intensive write activity.

    Additional work was done to improve data node survival of long periods of synchronization without timing out, by updating the LCP watchdog during this process, and keeping better track of the progress of disk data synchronization. Previously, there was the possibility of spurious warnings or even node failures if synchronization took longer than the LCP watchdog timeout.

    Important

    When upgrading an NDB Cluster that uses disk data tables to NDB 7.6.4 or downgrading it from NDB 7.6.4, it is necessary to restart all data nodes with --initial.

  • Parallel undo log record processing.  Formerly, the data node LGMAN kernel block processed undo log records serially; now this is done in parallel. The rep thread, which hands off undo records to LDM threads, waited for an LDM to finish applying a record before fetching the next one; now the rep thread no longer waits, but proceeds immediately to the next record and LDM.

    A count of the number of outstanding log records for each LDM in LGMAN is kept, and decremented whenever an LDM has completed the execution of a record. All the records belonging to a page are sent to the same LDM thread but are not guaranteed to be processed in order, so a hash map of pages that have outstanding records maintains a queue for each of these pages. When the page is available in the page cache, all records pending in the queue are applied in order.

    A few types of records continue to be processed serially: UNDO_LCP, UNDO_LCP_FIRST, UNDO_LOCAL_LCP, UNDO_LOCAL_LCP_FIRST, UNDO_DROP, and UNDO_END.

    There are no user-visible changes in functionality directly associated with this performance enhancement; it is part of the work being done to improve undo log handling in support of partial local checkpoints in NDB Cluster 7.6.4.

  • Reading table and fragment IDs from extent for undo log applier.  When applying an undo log, it is necessary to obtain the table ID and fragment ID from the page ID. This was done previously by reading the page from the PGMAN kernel block using an extra PGMAN worker thread, but when applying the undo log it was necessary to read the page again.

    When using O_DIRECT this was very inefficient, since the page was not cached in the OS kernel. To correct this issue, mapping from page ID to table ID and fragment ID is now done using information from the extent header, which contains the table IDs and fragment IDs for the pages used within a given extent. The extent pages are always present in the page cache, so no extra reads from disk are required to perform the mapping. In addition, the information can already be read using existing TSMAN kernel block data structures.

    See the description of the ODirect data node configuration parameter, for more information.

  • NDB Cluster Auto-Installer improvements.  In NDB 7.6.4, node configuration parameters, their default values, and their documentation as found in the Auto-Installer have been better aligned with those found in NDB Cluster software releases. SSH support and configuration have also been improved. In addition, HTTPS is now used by default for Web connections, and cookies are no longer employed as a persistent data storage mechanism. More information about these and other changes in the Auto-Installer is given in the next several paragraphs.

    The Auto-Installer now implements a mechanism for setting configuration parameters that take discrete values. For example, the data node parameter Arbitration must now be set to one of its allowed values Default, Disabled, or WaitExternal.

    The Auto-Installer also now gets and shows the amount of disk space available per host to the cluster (as DiskFree), using this information to obtain realistic values for configuration parameters that depend on it.

    Secure connection support in the MySQL NDB Cluster Auto-Installer has been updated or improved in NDB Cluster 7.6.4 as follows:

    • Added a mechanism for setting SSH membership for each host.

    • Updated the Paramiko Python module to the latest available version (2.6.1).

    • Provided a place in the GUI for encrypted private key passwords, and discontinued use of hardcoded passwords such as Password=None.

    Other enhancements relating to data security that are implemented in NDB 7.6.4 include the following:

    • Discontinued use of cookies as a persistent store of NDB Cluster configuration information; these were not secure and came with a hard upper limit on storage. Now the Auto-Installer uses an encrypted file for this purpose.

    • In order to secure data transfer between the JavaScript front end in the user's web browser and the Python web server on the back end, the default communications protocol for this has been switched from HTTP to HTTPS.

    See Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”, for more information.

  • Shared memory transporter.  User-defined shared memory (SHM) connections between a data node and an API node on the same host computer are supported in NDB 7.6.6 and later, and are no longer considered experimental. You can enable an explicit shared memory connection by setting the UseShm configuration parameter to 1 for the relevant data node. When explicitly defining shared memory as the connection method, it is also necessary that both the data node and the API node are identified by HostName.

    Performance of SHM connections can be enhanced through setting parameters such as ShmSize, ShmSpintime, and SendBufferMemory in an [shm] or [shm default] section of the cluster configuration file (config.ini). Configuration of SHM is otherwise similar to that of the TCP transporter.
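
    A config.ini sketch (the host address, node IDs, and parameter values are hypothetical) enabling an explicit SHM connection between a data node and an API node on the same host:

```ini
[ndbd]
NodeId=1
HostName=198.51.100.10
# Use shared memory for connections to API nodes on this host
UseShm=1

[mysqld]
NodeId=51
HostName=198.51.100.10

[shm default]
ShmSize=4M
ShmSpintime=200
SendBufferMemory=2M
```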

    The SigNum parameter is not used in the new SHM implementation, and any settings made for it are now ignored. Section 21.3.3.12, “NDB Cluster Shared Memory Connections”, provides more information about these parameters. In addition, as part of this work, NDB code relating to the old SCI transporter has been removed.

    For more information, see Section 21.3.3.12, “NDB Cluster Shared Memory Connections”.

  • SPJ block inner join optimization.  In NDB 7.6.6 and later, the SPJ kernel block can take into account, when evaluating a join request, that at least some of the tables are INNER-joined. This means that it can eliminate requests for rows, ranges, or both as soon as it becomes known that one or more of the preceding requests did not return any results for a parent row. This saves both the data nodes and the SPJ block from having to handle requests and result rows which can never take part in an INNER-joined result row.

    Consider this join query, where pk is the primary key on tables t2, t3, and t4, and columns x, y, and z are nonindexed columns:

    SELECT * FROM t1
      JOIN t2 ON t2.pk = t1.x
      JOIN t3 ON t3.pk = t1.y
      JOIN t4 ON t4.pk = t1.z;
    

    Previously, this resulted in an SPJ request including a scan on table t1, and lookups on each of the tables t2, t3, and t4; these were evaluated for every row returned from t1. For these, SPJ created LQHKEYREQ requests for tables t2, t3, and t4. Now SPJ takes into consideration the requirement that, to produce any result rows, an inner join must find a match in all joined tables; as soon as no match is found for one of the tables, any further requests for tables having the same parent are now skipped.

    Note

    This optimization cannot be applied until all of the data nodes and all of the API nodes in the cluster have been upgraded to NDB 7.6.6 or later.

  • NDB wakeup thread.  NDB uses a poll receiver to read from sockets, to execute messages from the sockets, and to wake up other threads. When making only intermittent use of a receive thread, poll ownership is given up before starting to wake up other threads, which provides some degree of parallelism in the receive thread, but, when making constant use of the receive thread, the thread can be overburdened by tasks including wakeup of other threads.

    NDB 7.6.6 and later supports offloading by the receiver thread of the task of waking up other threads to a new thread that wakes up other threads on request (and otherwise simply sleeps), making it possible to improve the capacity of a single cluster connection by roughly ten to twenty percent.

  • Adaptive LCP control.  NDB 7.6.7 implements an adaptive LCP control mechanism which acts in response to changes in redo log space usage. By controlling LCP disk write speed, you can help protect against a number of resource-related issues, including the following:

    • Insufficient CPU resources for traffic applications

    • Disk overload

    • Insufficient redo log buffer

    • GCP Stop conditions

    • Insufficient redo log space

    • Insufficient undo log space

    This work includes the following changes relating to NDB configuration parameters:

    • The default value of the RecoveryWork data node parameter is increased from 50 to 60; that is, NDB now uses 1.6 times the size of the data for storage of LCPs.

    • A new data node configuration parameter InsertRecoveryWork provides additional tuning capabilities through controlling the percentage of RecoveryWork that is reserved for insert operations. The default value is 40 (that is, 40% of the storage space already reserved by RecoveryWork); the minimum and maximum are 0 and 70, respectively. Increasing this value allows for more writes to be performed during an LCP, while limiting the total size of the LCP. Decreasing InsertRecoveryWork limits the number of writes used during an LCP, but results in more space being used for the LCP, which means that recovery takes longer.
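
    Taken together, adaptive checkpointing might be configured in config.ini as in this sketch (the values shown are the defaults described above):

```ini
[ndbd default]
# Adapt LCP disk write speed to redo log usage
EnableRedoControl=1
RecoveryWork=60
# Portion of RecoveryWork reserved for insert operations (0-70)
InsertRecoveryWork=40
```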

    This work implements control of LCP speed chiefly to minimize the risk of running out of redo log. This is done in an adaptive fashion, based on the amount of redo log space used, according to the alert levels shown here, together with the responses taken when each level is attained:

    • Low: Redo log space usage is greater than 25%, or estimated usage shows insufficient redo log space at a very high transaction rate. In response, use of LCP data buffers is increased during LCP scans, priority of LCP scans is increased, and the amount of data that can be written per real-time break in an LCP scan is also increased.

    • High: Redo log space usage is greater than 40%, or estimated usage shows insufficient redo log space at a high transaction rate. When this level of usage is reached, MaxDiskWriteSpeed is increased to the value of MaxDiskWriteSpeedOtherNodeRestart. In addition, the minimum speed is doubled, and the priority of LCP scans and the amount that can be written per real-time break are both increased further.

    • Critical: Redo log space usage is greater than 60%, or estimated usage shows insufficient redo log space at a normal transaction rate. At this level, MaxDiskWriteSpeed is increased to the value of MaxDiskWriteSpeedOwnRestart; MinDiskWriteSpeed is also set to this value. Priority of LCP scans and the amount of data that can be written per real-time break are increased further, and the LCP data buffer is completely available during the LCP scan.

    Raising the level also has the effect of increasing the calculated target checkpoint speed.

    LCP control has the following benefits for NDB installations:

    • Clusters should now survive very heavy loads using default configurations much better than previously.

    • It should now be possible for NDB to run reliably on systems where the available disk space is (at a rough minimum) 2.1 times the amount of memory allocated to it (DataMemory). You should note that this figure does not include any disk space used for Disk Data tables.
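
      As a rough back-of-envelope aid (the helper function below is not part of NDB), the LCP file footprint implied by RecoveryWork can be estimated as (1 + RecoveryWork/100) times DataMemory:

```python
def lcp_storage_gb(data_memory_gb, recovery_work=60):
    """Approximate LCP file footprint in GB: (1 + RecoveryWork/100) x DataMemory."""
    return (1 + recovery_work / 100) * data_memory_gb

# RecoveryWork=50 (the NDB 7.6.4 default): about 1.5 x DataMemory
print(lcp_storage_gb(10, recovery_work=50))  # -> 15.0
# RecoveryWork=60 (the default from NDB 7.6.5): about 1.6 x DataMemory
print(lcp_storage_gb(10))  # -> 16.0
```

      This estimate does not include redo or undo log space, nor Disk Data tablespaces, which account for the remainder of the rough 2.1-times figure above.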

  • ndb_restore options.  Beginning with NDB 7.6.9, the --nodeid and --backupid options are both required when invoking ndb_restore.

21.1.5 NDB: Added, Deprecated, and Removed Options, Variables, and Parameters

21.1.5.1 Options, Variables, and Parameters Added, Deprecated, or Removed in NDB 7.5

This section contains information about NDB configuration parameters and mysqld options and variables that have been added to, deprecated in, or removed from NDB 7.5.

Node Configuration Parameters Introduced in NDB 7.5

The following node configuration parameters have been added in NDB 7.5.

  • ApiVerbose: Enable NDB API debugging; for NDB development. Added in NDB 7.5.2.

Node Configuration Parameters Deprecated in NDB 7.5

The following node configuration parameters have been deprecated in NDB 7.5.

  • ExecuteOnComputer: String referencing an earlier defined COMPUTER. Deprecated as of NDB 7.5.0.

Node Configuration Parameters Removed in NDB 7.5

The following node configuration parameters have been removed in NDB 7.5.

  • DiskCheckpointSpeed: Bytes allowed to be written by checkpoint, per second. Removed in NDB 7.5.0.

  • DiskCheckpointSpeedInRestart: Bytes allowed to be written by checkpoint during restart, per second. Removed in NDB 7.5.0.

  • Id: Number identifying data node. Now deprecated; use NodeId instead. Removed in NDB 7.5.0.

  • MaxNoOfSavedEvents: Not used. Removed in NDB 7.5.0.

  • PortNumber: Port used for this SCI transporter (DEPRECATED). Removed in NDB 7.5.1.

  • PortNumber: Port used for this SHM transporter (DEPRECATED). Removed in NDB 7.5.1.

  • PortNumber: Port used for this TCP transporter (DEPRECATED). Removed in NDB 7.5.1.

  • ReservedSendBufferMemory: This parameter is present in NDB code but is not enabled, and is now deprecated. Removed in NDB 7.5.0.

MySQL Server Options and Variables Introduced in NDB 7.5

The following mysqld system variables, status variables, and options have been added in NDB 7.5.

  • Ndb_system_name: Configured cluster system name; empty if server not connected to NDB. Added in NDB 7.5.7.

  • ndb-allow-copying-alter-table: Set to OFF to keep ALTER TABLE from using copying operations on NDB tables. Added in NDB 7.5.0.

  • ndb-cluster-connection-pool-nodeids: Comma-separated list of node IDs for connections to the cluster used by MySQL; the number of nodes in the list must be the same as the value set for --ndb-cluster-connection-pool. Added in NDB 7.5.0.

  • ndb-default-column-format: Use this value (FIXED or DYNAMIC) by default for COLUMN_FORMAT and ROW_FORMAT options when creating or adding columns to a table. Added in NDB 7.5.1.

  • ndb-log-update-minimal: Log updates in a minimal format. Added in NDB 7.5.7.

  • ndb_data_node_neighbour: Specifies cluster data node "closest" to this MySQL Server, for transaction hinting and fully replicated tables. Added in NDB 7.5.2.

  • ndb_default_column_format: Sets default row format and column format (FIXED or DYNAMIC) used for new NDB tables. Added in NDB 7.5.1.

  • ndb_read_backup: Enable read from any replica. Added in NDB 7.5.2.

MySQL Server Options and Variables Deprecated in NDB 7.5

No system variables, status variables, or options have been deprecated in NDB 7.5.

MySQL Server Options and Variables Removed in NDB 7.5

No system variables, status variables, or options have been removed from NDB 7.5.

21.1.5.2 Options, Variables, and Parameters Added, Deprecated, or Removed in NDB 7.6

This section contains information about NDB configuration parameters and mysqld options and variables that have been added to, deprecated in, or removed from NDB 7.6.

Node Configuration Parameters Introduced in NDB 7.6

The following node configuration parameters have been added in NDB 7.6.

  • EnablePartialLcp: Enable partial LCP (true); if this is disabled (false), all LCPs write full checkpoints. Added in NDB 7.6.4.

  • EnableRedoControl: Enable adaptive checkpointing speed for controlling redo log usage. Added in NDB 7.6.7.

  • InsertRecoveryWork: Percentage of RecoveryWork used for inserted rows; has no effect unless partial local checkpoints are in use. Added in NDB 7.6.5.

  • LocationDomainId: Assign this API node to a specific availability domain or zone. 0 (default) leaves this unset. Added in NDB 7.6.4.

  • LocationDomainId: Assign this management node to a specific availability domain or zone. 0 (default) leaves this unset. Added in NDB 7.6.4.

  • LocationDomainId: Assign this data node to a specific availability domain or zone. 0 (default) leaves this unset. Added in NDB 7.6.4.

  • MaxFKBuildBatchSize: Maximum scan batch size to use for building foreign keys. Increasing this value may speed up builds of foreign keys but impacts ongoing traffic as well. Added in NDB 7.6.4.

  • MaxReorgBuildBatchSize: Maximum scan batch size to use for reorganization of table partitions. Increasing this value may speed up table partition reorganization but impacts ongoing traffic as well. Added in NDB 7.6.4.

  • MaxUIBuildBatchSize: Maximum scan batch size to use for building unique keys. Increasing this value may speed up builds of unique keys but impacts ongoing traffic as well. Added in NDB 7.6.4.

  • ODirectSyncFlag: O_DIRECT writes are treated as synchronized writes; ignored when ODirect is not enabled, InitFragmentLogFiles is set to SPARSE, or both. Added in NDB 7.6.4.

  • PreSendChecksum: If this parameter and Checksum are both enabled, perform pre-send checksum checks, and check all SHM signals between nodes for errors. Added in NDB 7.6.6.

    pre send checksum:如果此参数和校验和都已启用,则执行发送前校验和检查,并检查节点之间的所有shm信号是否有错误。在ndb 7.6.6中添加。

  • PreSendChecksum: If this parameter and Checksum are both enabled, perform pre-send checksum checks, and check all TCP signals between nodes for errors. Added in NDB 7.6.6.

    pre send checksum:如果此参数和校验和都已启用,则执行发送前校验和检查,并检查节点之间的所有TCP信号是否有错误。在ndb 7.6.6中添加。

  • RecoveryWork: Percentage of storage overhead for LCP files: greater value means less work in normal operations, more work during recovery. Added in NDB 7.6.4.

    RecoveryWork:LCP文件的存储开销百分比:值越大,表示正常操作中的工作越少,恢复期间的工作越多。在ndb 7.6.4中添加。

  • SendBufferMemory: Bytes in shared memory buffer for signals sent from this node. Added in NDB 7.6.6.

    sendbuffermemory:共享内存缓冲区中用于从该节点发送信号的字节数。在ndb 7.6.6中添加。

  • ShmSpinTime: When receiving, number of microseconds to spin before sleeping. Added in NDB 7.6.6.

    shmspintime:接收时,睡眠前旋转的微秒数。在ndb 7.6.6中添加。

  • UseShm: Use shared memory connections between this data node and API node also running on this host. Added in NDB 7.6.6.

    useshm:使用此数据节点和也在此主机上运行的api节点之间的共享内存连接。在ndb 7.6.6中添加。

  • WatchDogImmediateKill: When true, threads are immediately killed whenever watchdog issues occur; used for testing and debugging. Added in NDB 7.6.7.

    watchdoginmediatekill:如果为true,则每当出现watchdog问题时,线程都会立即被终止;用于测试和调试。在ndb 7.6.7中添加。

Node Configuration Parameters Deprecated in NDB 7.6

The following node configuration parameters have been deprecated in NDB 7.6.

  • BackupDataBufferSize: Default size of databuffer for a backup (in bytes). Deprecated as of NDB 7.6.4.

  • BackupMaxWriteSize: Maximum size of file system writes made by backup (in bytes). Deprecated as of NDB 7.6.4.

  • BackupWriteSize: Default size of file system writes made by backup (in bytes). Deprecated as of NDB 7.6.4.

  • IndexMemory: Number of bytes on each data node allocated for storing indexes; subject to available system RAM and size of DataMemory. Deprecated as of NDB 7.6.2.

  • Signum: Signal number to be used for signalling. Deprecated as of NDB 7.6.6.

Node Configuration Parameters Removed in NDB 7.6

No node configuration parameters have been removed from NDB 7.6.

MySQL Server Options and Variables Introduced in NDB 7.6

The following mysqld system variables, status variables, and options have been added in NDB 7.6.

  • Ndb_system_name: Configured cluster system name; empty if server not connected to NDB. Added in NDB 7.6.2.

  • ndb-log-update-minimal: Log updates in a minimal format. Added in NDB 7.6.3.

  • ndb_row_checksum: When enabled, set row checksums; enabled by default. Added in NDB 7.6.8.

MySQL Server Options and Variables Deprecated in NDB 7.6

No system variables, status variables, or options have been deprecated in NDB 7.6.

MySQL Server Options and Variables Removed in NDB 7.6

No system variables, status variables, or options have been removed from NDB 7.6.

21.1.6 MySQL Server Using InnoDB Compared with NDB Cluster

MySQL Server offers a number of choices in storage engines. Since both NDB and InnoDB can serve as transactional MySQL storage engines, users of MySQL Server sometimes become interested in NDB Cluster. They see NDB as a possible alternative or upgrade to the default InnoDB storage engine in MySQL 5.7. While NDB and InnoDB share common characteristics, there are differences in architecture and implementation, so that some existing MySQL Server applications and usage scenarios can be a good fit for NDB Cluster, but not all of them.

In this section, we discuss and compare some characteristics of the NDB storage engine used by NDB 7.5 with InnoDB used in MySQL 5.7. The next few sections provide a technical comparison. In many instances, decisions about when and where to use NDB Cluster must be made on a case-by-case basis, taking all factors into consideration. While it is beyond the scope of this documentation to provide specifics for every conceivable usage scenario, we also attempt to offer some very general guidance on the relative suitability of some common types of applications for NDB as opposed to InnoDB back ends.

NDB Cluster 7.5 uses a mysqld based on MySQL 5.7, including support for InnoDB 1.1. While it is possible to use InnoDB tables with NDB Cluster, such tables are not clustered. It is also not possible to use programs or libraries from an NDB Cluster 7.5 distribution with MySQL Server 5.7, or the reverse.

While it is also true that some types of common business applications can be run either on NDB Cluster or on MySQL Server (most likely using the InnoDB storage engine), there are some important architectural and implementation differences. Section 21.1.6.1, “Differences Between the NDB and InnoDB Storage Engines”, provides a summary of these differences. Due to the differences, some usage scenarios are clearly more suitable for one engine or the other; see Section 21.1.6.2, “NDB and InnoDB Workloads”. This in turn has an impact on the types of applications that are better suited for use with NDB or InnoDB. See Section 21.1.6.3, “NDB and InnoDB Feature Usage Summary”, for a comparison of the relative suitability of each for use in common types of database applications.

For information about the relative characteristics of the NDB and MEMORY storage engines, see When to Use MEMORY or NDB Cluster.

See Chapter 15, Alternative Storage Engines, for additional information about MySQL storage engines.

21.1.6.1 Differences Between the NDB and InnoDB Storage Engines

The NDB storage engine is implemented using a distributed, shared-nothing architecture, which causes it to behave differently from InnoDB in a number of ways. For those unaccustomed to working with NDB, unexpected behaviors can arise due to its distributed nature with regard to transactions, foreign keys, table limits, and other characteristics. These are shown in the following table:

Table 21.1 Differences between InnoDB and NDB storage engines

Feature                            | InnoDB (MySQL 5.7)                    | NDB 7.5/7.6
MySQL Server Version               | 5.7                                   | 5.7
InnoDB Version                     | InnoDB 5.7.29                         | InnoDB 5.7.29
NDB Cluster Version                | N/A                                   | NDB 7.5.16/7.6.12
Storage Limits                     | 64TB                                  | 128TB (as of NDB 7.5.2)
Foreign Keys                       | Yes                                   | Yes
Transactions                       | All standard types                    | READ COMMITTED
MVCC                               | Yes                                   | No
Data Compression                   | Yes                                   | No (NDB checkpoint and backup files can be compressed)
Large Row Support (> 14K)          | Supported for VARBINARY, VARCHAR, BLOB, and TEXT columns | Supported for BLOB and TEXT columns only (Using these types to store very large amounts of data can lower NDB performance)
Replication Support                | Asynchronous and semisynchronous replication using MySQL Replication; MySQL Group Replication | Automatic synchronous replication within an NDB Cluster; asynchronous replication between NDB Clusters, using MySQL Replication (Semisynchronous replication is not supported)
Scaleout for Read Operations       | Yes (MySQL Replication)               | Yes (Automatic partitioning in NDB Cluster; NDB Cluster Replication)
Scaleout for Write Operations      | Requires application-level partitioning (sharding) | Yes (Automatic partitioning in NDB Cluster is transparent to applications)
High Availability (HA)             | Built-in, from InnoDB cluster         | Yes (Designed for 99.999% uptime)
Node Failure Recovery and Failover | From MySQL Group Replication          | Automatic (Key element in NDB architecture)
Time for Node Failure Recovery     | 30 seconds or longer                  | Typically < 1 second
Real-Time Performance              | No                                    | Yes
In-Memory Tables                   | No                                    | Yes (Some data can optionally be stored on disk; both in-memory and disk data storage are durable)
NoSQL Access to Storage Engine     | Yes                                   | Yes (Multiple APIs, including Memcached, Node.js/JavaScript, Java, JPA, C++, and HTTP/REST)
Concurrent and Parallel Writes     | Yes                                   | Up to 48 writers, optimized for concurrent writes
Conflict Detection and Resolution (Multiple Replication Masters) | Yes (MySQL Group Replication) | Yes
Hash Indexes                       | No                                    | Yes
Online Addition of Nodes           | Read/write replicas using MySQL Group Replication | Yes (all node types)
Online Upgrades                    | Yes (using replication)               | Yes
Online Schema Modifications        | Yes, as part of MySQL 5.7             | Yes

21.1.6.2 NDB and InnoDB Workloads

NDB Cluster has a range of unique attributes that make it ideal to serve applications requiring high availability, fast failover, high throughput, and low latency. Due to its distributed architecture and multi-node implementation, NDB Cluster also has specific constraints that may keep some workloads from performing well. A number of major differences in behavior between the NDB and InnoDB storage engines with regard to some common types of database-driven application workloads are shown in the following table:

Table 21.2 Differences between InnoDB and NDB storage engines, common types of data-driven application workloads

Workload                                         | InnoDB | NDB Cluster (NDB)
High-Volume OLTP Applications                    | Yes    | Yes
DSS Applications (data marts, analytics)         | Yes    | Limited (Join operations across OLTP datasets not exceeding 3TB in size)
Custom Applications                              | Yes    | Yes
Packaged Applications                            | Yes    | Limited (should be mostly primary key access); NDB Cluster 7.5 supports foreign keys
In-Network Telecoms Applications (HLR, HSS, SDP) | No     | Yes
Session Management and Caching                   | Yes    | Yes
E-Commerce Applications                          | Yes    | Yes
User Profile Management, AAA Protocol            | Yes    | Yes

21.1.6.3 NDB and InnoDB Feature Usage Summary

When comparing application feature requirements to the capabilities of InnoDB with NDB, some are clearly more compatible with one storage engine than the other.

The following table lists supported application features according to the storage engine to which each feature is typically better suited.

Table 21.3 Supported application features according to the storage engine to which each feature is typically better suited

Preferred application requirements for InnoDB:

  • Foreign keys

    Note

    NDB Cluster 7.5 supports foreign keys

  • Full table scans

  • Very large databases, rows, or transactions

  • Transactions other than READ COMMITTED

Preferred application requirements for NDB:

  • Write scaling

  • 99.999% uptime

  • Online addition of nodes and online schema operations

  • Multiple SQL and NoSQL APIs (see NDB Cluster APIs: Overview and Concepts)

  • Real-time performance

  • Limited use of BLOB columns

  • Foreign keys are supported, although their use may have an impact on performance at high throughput


21.1.7 Known Limitations of NDB Cluster

In the sections that follow, we discuss known limitations in current releases of NDB Cluster as compared with the features available when using the MyISAM and InnoDB storage engines. If you check the Cluster category in the MySQL bugs database at http://bugs.mysql.com, you can find known bugs in the following categories under MySQL Server, which we intend to correct in upcoming releases of NDB Cluster:

  • NDB Cluster

  • Cluster Direct API (NDBAPI)

  • Cluster Disk Data

  • Cluster Replication

  • ClusterJ

This information is intended to be complete with respect to the conditions just set forth. You can report any discrepancies that you encounter to the MySQL bugs database using the instructions given in Section 1.7, “How to Report Bugs or Problems”. If we do not plan to fix the problem in NDB Cluster 7.5, we will add it to the list.

See Previous NDB Cluster Issues Resolved in NDB Cluster 7.3 for a list of issues in earlier releases that have been resolved in NDB Cluster 7.5.

Note

Limitations and other issues specific to NDB Cluster Replication are described in Section 21.6.3, “Known Issues in NDB Cluster Replication”.

21.1.7.1 Noncompliance with SQL Syntax in NDB Cluster

Some SQL statements relating to certain MySQL features produce errors when used with NDB tables, as described in the following list:

  • Temporary tables.  Temporary tables are not supported. Trying either to create a temporary table that uses the NDB storage engine or to alter an existing temporary table to use NDB fails with the error Table storage engine 'ndbcluster' does not support the create option 'TEMPORARY'.
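
The failure can be demonstrated from the mysql client; the table name here is hypothetical, and the exact error number may vary by version:

```sql
mysql> CREATE TEMPORARY TABLE t_scratch (id INT) ENGINE=NDBCLUSTER;
ERROR 1478 (HY000): Table storage engine 'ndbcluster' does not support
the create option 'TEMPORARY'
```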

  • Indexes and keys in NDB tables.  Keys and indexes on NDB Cluster tables are subject to the following limitations:

    • Column width.  Attempting to create an index on an NDB table column whose width is greater than 3072 bytes succeeds, but only the first 3072 bytes are actually used for the index. In such cases, a warning Specified key was too long; max key length is 3072 bytes is issued, and a SHOW CREATE TABLE statement shows the length of the index as 3072.

    • TEXT and BLOB columns.  You cannot create indexes on NDB table columns that use any of the TEXT or BLOB data types.

    • FULLTEXT indexes.  The NDB storage engine does not support FULLTEXT indexes, which are possible for MyISAM and InnoDB tables only.

      However, you can create indexes on VARCHAR columns of NDB tables.

    • USING HASH keys and NULL.  Using nullable columns in unique keys and primary keys means that queries using these columns are handled as full table scans. To work around this issue, make the column NOT NULL, or re-create the index without the USING HASH option.

    • Prefixes.  There are no prefix indexes; only entire columns can be indexed. (The size of an NDB column index is always the same as the width of the column in bytes, up to and including 3072 bytes, as described earlier in this section. Also see Section 21.1.7.6, “Unsupported or Missing Features in NDB Cluster”, for additional information.)

    • BIT columns.  A BIT column cannot be a primary key, unique key, or index, nor can it be part of a composite primary key, unique key, or index.

    • AUTO_INCREMENT columns.  Like other MySQL storage engines, the NDB storage engine can handle a maximum of one AUTO_INCREMENT column per table. However, in the case of an NDB table with no explicit primary key, an AUTO_INCREMENT column is automatically defined and used as a hidden primary key. For this reason, you cannot define a table that has an explicit AUTO_INCREMENT column unless that column is also declared using the PRIMARY KEY option. Attempting to create a table with an AUTO_INCREMENT column that is not the table's primary key, and using the NDB storage engine, fails with an error.
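
Two of the limitations above can be sketched with short statements; the table and column names are illustrative only:

```sql
-- An AUTO_INCREMENT column must be the primary key: this fails with an error ...
CREATE TABLE t1 (a INT AUTO_INCREMENT, b INT PRIMARY KEY) ENGINE=NDB;
-- ... while this succeeds:
CREATE TABLE t2 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) ENGINE=NDB;

-- A nullable column in a USING HASH unique key turns lookups into full
-- table scans; declaring the column NOT NULL avoids this:
CREATE TABLE t3 (
    id INT NOT NULL PRIMARY KEY,
    ref_code INT NOT NULL,
    UNIQUE KEY uk_ref (ref_code) USING HASH
) ENGINE=NDB;
```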

  • Restrictions on foreign keys.  Support for foreign key constraints in NDB 7.5 is comparable to that provided by InnoDB, subject to the following restrictions:

    • Every column referenced as a foreign key requires an explicit unique key, if it is not the table's primary key.

    • ON UPDATE CASCADE is not supported when the reference is to the parent table's primary key.

      This is because an update of a primary key is implemented as a delete of the old row (containing the old primary key) plus an insert of the new row (with a new primary key). This is not visible to the NDB kernel, which views these two rows as being the same, and thus has no way of knowing that this update should be cascaded.

    • As of NDB 7.5.14 and NDB 7.6.10: ON DELETE CASCADE is not supported where the child table contains one or more columns of any of the TEXT or BLOB types. (Bug #89511, Bug #27484882)

    • SET DEFAULT is not supported. (Also not supported by InnoDB.)

    • The NO ACTION keywords are accepted but treated as RESTRICT. (Also the same as with InnoDB.)

    • In earlier versions of NDB Cluster, when creating a table with a foreign key referencing an index in another table, it sometimes appeared possible to create the foreign key even if the order of the columns in the indexes did not match, due to the fact that an appropriate error was not always returned internally. A partial fix for this issue improved the error used internally to work in most cases; however, it remains possible for this situation to occur in the event that the parent index is a unique index. (Bug #18094360)

    • Prior to NDB 7.5.6, when adding or dropping a foreign key using ALTER TABLE, the parent table's metadata was not updated, which made it possible subsequently to execute ALTER TABLE statements on the parent that should be invalid. To work around this issue, execute SHOW CREATE TABLE on the parent table immediately after adding or dropping the foreign key; this forces the parent's metadata to be reloaded.

      This issue is fixed in NDB 7.5.6. (Bug #82989, Bug #24666177)

    For more information, see Section 13.1.18.6, “Using FOREIGN KEY Constraints”, and Section 1.8.3.2, “FOREIGN KEY Constraints”.

  • NDB Cluster and geometry data types.  Geometry data types (WKT and WKB) are supported for NDB tables. However, spatial indexes are not supported.

  • Character sets and binary log files.  Currently, the ndb_apply_status and ndb_binlog_index tables are created using the latin1 (ASCII) character set. Because names of binary logs are recorded in this table, binary log files named using non-Latin characters are not referenced correctly in these tables. This is a known issue, which we are working to fix. (Bug #50226)

    To work around this problem, use only Latin-1 characters when naming binary log files or setting any of the --basedir, --log-bin, or --log-bin-index options.
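
One way to apply this workaround is to set the log names explicitly in the server configuration; the names shown here are examples only:

```ini
[mysqld]
# Use only Latin-1 (here, plain ASCII) characters in these names
log-bin=cluster-bin
log-bin-index=cluster-bin.index
```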

  • Creating NDB tables with user-defined partitioning.  Support for user-defined partitioning in NDB Cluster is restricted to [LINEAR] KEY partitioning. Using any other partitioning type with ENGINE=NDB or ENGINE=NDBCLUSTER in a CREATE TABLE statement results in an error.

    It is possible to override this restriction, but doing so is not supported for use in production settings. For details, see User-defined partitioning and the NDB storage engine (NDB Cluster).

    Default partitioning scheme.  All NDB Cluster tables are by default partitioned by KEY using the table's primary key as the partitioning key. If no primary key is explicitly set for the table, the hidden primary key automatically created by the NDB storage engine is used instead. For additional discussion of these and related issues, see Section 22.2.5, “KEY Partitioning”.

    CREATE TABLE and ALTER TABLE statements that would cause a user-partitioned NDBCLUSTER table not to meet either or both of the following two requirements are not permitted, and fail with an error:

    1. The table must have an explicit primary key.

    2. All columns listed in the table's partitioning expression must be part of the primary key.

    Exception.  If a user-partitioned NDBCLUSTER table is created using an empty column-list (that is, using PARTITION BY [LINEAR] KEY()), then no explicit primary key is required.

    Maximum number of partitions for NDBCLUSTER tables.  The maximum number of partitions that can be defined for an NDBCLUSTER table when employing user-defined partitioning is 8 per node group. (See Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”, for more information about NDB Cluster node groups.)

    DROP PARTITION not supported.  It is not possible to drop partitions from NDB tables using ALTER TABLE ... DROP PARTITION. The other partitioning extensions to ALTER TABLE (ADD PARTITION, REORGANIZE PARTITION, and COALESCE PARTITION) are supported for NDB tables, but use copying and so are not optimized. See Section 22.3.1, “Management of RANGE and LIST Partitions” and Section 13.1.8, “ALTER TABLE Syntax”.
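
The partitioning rules above can be illustrated as follows; all table names are hypothetical:

```sql
-- Allowed: KEY partitioning on a column that is part of the primary key
CREATE TABLE tp1 (
    id INT NOT NULL,
    created DATE NOT NULL,
    PRIMARY KEY (id)
) ENGINE=NDB
PARTITION BY KEY (id);

-- Allowed: empty column-list form; no explicit primary key is required
CREATE TABLE tp2 (c1 INT, c2 VARCHAR(20)) ENGINE=NDB
PARTITION BY LINEAR KEY ();

-- Fails with an error: partitioning types other than [LINEAR] KEY
CREATE TABLE tp3 (id INT NOT NULL PRIMARY KEY) ENGINE=NDB
PARTITION BY HASH (id);
```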

  • Row-based replication.  When using row-based replication with NDB Cluster, binary logging cannot be disabled. That is, the NDB storage engine ignores the value of sql_log_bin.

  • JSON data type.  The MySQL JSON data type is supported for NDB tables in the mysqld supplied with NDB 7.5.2 and later.

    An NDB table can have a maximum of 3 JSON columns.

    The NDB API has no special provision for working with JSON data, which it views simply as BLOB data. Handling data as JSON must be performed by the application.
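
For example (the table is hypothetical), up to three JSON columns are accepted:

```sql
CREATE TABLE jt (
    id INT NOT NULL PRIMARY KEY,
    doc1 JSON,
    doc2 JSON,
    doc3 JSON    -- a fourth JSON column would cause the statement to fail
) ENGINE=NDB;
```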

  • CPU and thread info ndbinfo tables.  NDB 7.5.2 adds several new tables to the ndbinfo information database providing information about CPU and thread activity by node, thread ID, and thread type. The tables are listed here:

    • cpustat: Provides per-second, per-thread CPU statistics

    • cpustat_50ms: Raw per-thread CPU statistics data, gathered every 50ms

    • cpustat_1sec: Raw per-thread CPU statistics data, gathered each second

    • cpustat_20sec: Raw per-thread CPU statistics data, gathered every 20 seconds

    • threads: Names and descriptions of thread types

    For more information about these tables, see Section 21.5.10, “ndbinfo: The NDB Cluster Information Database”.

  • Lock info ndbinfo tables.  NDB 7.5.3 adds new tables to the ndbinfo information database providing information about locks and lock attempts in a running NDB Cluster. These tables are listed here:

    • cluster_locks: Current lock requests which are waiting for or holding locks; this information can be useful when investigating stalls and deadlocks. Analogous to cluster_operations.

    • locks_per_fragment: Counts of lock claim requests, and their outcomes per fragment, as well as total time spent waiting for locks successfully and unsuccessfully. Analogous to operations_per_fragment and memory_per_fragment.

    • server_locks: Subset of cluster transactions (those running on the local mysqld), showing a connection id per transaction. Analogous to server_operations.
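
These tables can be queried with ordinary SELECT statements from any SQL node; for example, when investigating a suspected lock wait (the columns returned depend on the NDB version):

```sql
-- Current lock requests, waiting or granted
SELECT * FROM ndbinfo.cluster_locks;

-- Per-fragment lock statistics
SELECT * FROM ndbinfo.locks_per_fragment;
```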

21.1.7.2 Limits and Differences of NDB Cluster from Standard MySQL Limits

In this section, we list limits found in NDB Cluster that either differ from limits found in, or that are not found in, standard MySQL.

Memory usage and recovery.  Memory consumed when data is inserted into an NDB table is not automatically recovered when deleted, as it is with other storage engines. Instead, the following rules hold true:

  • A DELETE statement on an NDB table makes the memory formerly used by the deleted rows available for re-use by inserts on the same table only. However, this memory can be made available for general re-use by performing OPTIMIZE TABLE.

    A rolling restart of the cluster also frees any memory used by deleted rows. See Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”.

  • A DROP TABLE or TRUNCATE TABLE operation on an NDB table frees the memory that was used by this table for re-use by any NDB table, either by the same table or by another NDB table.

    Note

    Recall that TRUNCATE TABLE drops and re-creates the table. See Section 13.1.34, “TRUNCATE TABLE Syntax”.
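
A minimal sketch of the reclamation behavior described above; the table name is hypothetical:

```sql
DELETE FROM t1 WHERE created < '2015-01-01';
-- The freed memory is now reusable only by inserts into t1.
-- Make it available for general re-use by other NDB tables:
OPTIMIZE TABLE t1;
```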

  • Limits imposed by the cluster's configuration.  A number of hard limits exist which are configurable, but available main memory in the cluster sets limits. See the complete list of configuration parameters in Section 21.3.3, “NDB Cluster Configuration Files”. Most configuration parameters can be upgraded online. These hard limits include:

    • Database memory size and index memory size (DataMemory and IndexMemory, respectively).

      DataMemory is allocated as 32KB pages. As each DataMemory page is used, it is assigned to a specific table; once allocated, this memory cannot be freed except by dropping the table.

      See Section 21.3.3.6, “Defining NDB Cluster Data Nodes”, for more information.

    • The maximum number of operations that can be performed per transaction is set using the configuration parameters MaxNoOfConcurrentOperations and MaxNoOfLocalOperations.

      Note

      Bulk loading, TRUNCATE TABLE, and ALTER TABLE are handled as special cases by running multiple transactions, and so are not subject to this limitation.

    • Different limits related to tables and indexes. For example, the maximum number of ordered indexes in the cluster is determined by MaxNoOfOrderedIndexes, and the maximum number of ordered indexes per table is 16.

  • Node and data object maximums.  The following limits apply to numbers of cluster nodes and metadata objects:

    • The maximum number of data nodes is 48.

      A data node must have a node ID in the range of 1 to 48, inclusive. (Management and API nodes may use node IDs in the range 1 to 255, inclusive.)

    • The total maximum number of nodes in an NDB Cluster is 255. This number includes all SQL nodes (MySQL Servers), API nodes (applications accessing the cluster other than MySQL servers), data nodes, and management servers.

    • The maximum number of metadata objects in current versions of NDB Cluster is 20320. This limit is hard-coded.

    See Previous NDB Cluster Issues Resolved in NDB Cluster 7.3, for more information.

21.1.7.3 Limits Relating to Transaction Handling in NDB Cluster

A number of limitations exist in NDB Cluster with regard to the handling of transactions. These include the following:

NDB集群在事务处理方面存在许多限制。其中包括:

  • Transaction isolation level.  The NDBCLUSTER storage engine supports only the READ COMMITTED transaction isolation level. (InnoDB, for example, supports READ COMMITTED, READ UNCOMMITTED, REPEATABLE READ, and SERIALIZABLE.) You should keep in mind that NDB implements READ COMMITTED on a per-row basis; when a read request arrives at the data node storing the row, what is returned is the last committed version of the row at that time.

    Uncommitted data is never returned, but when a transaction modifying a number of rows commits concurrently with a transaction reading the same rows, the transaction performing the read can observe before values, after values, or both, for different rows among these, due to the fact that a given row read request can be processed either before or after the commit of the other transaction.

    To ensure that a given transaction reads only before or after values, you can impose row locks using SELECT ... LOCK IN SHARE MODE. In such cases, the lock is held until the owning transaction is committed. Using row locks can also cause the following issues:

    • Increased frequency of lock wait timeout errors, and reduced concurrency

    • Increased transaction processing overhead due to reads requiring a commit phase

    • Possibility of exhausting the available number of concurrent locks, which is limited by MaxNoOfConcurrentOperations

    NDB uses READ COMMITTED for all reads unless a modifier such as LOCK IN SHARE MODE or FOR UPDATE is used. LOCK IN SHARE MODE causes shared row locks to be used; FOR UPDATE causes exclusive row locks to be used. Unique key reads have their locks upgraded automatically by NDB to ensure a self-consistent read; BLOB reads also employ extra locking for consistency.

    See Section 21.5.3.4, “NDB Cluster Backup Troubleshooting”, for information on how NDB Cluster's implementation of transaction isolation level can affect backup and restoration of NDB databases.

  • Transactions and BLOB or TEXT columns.  NDBCLUSTER stores only part of a column value that uses any of MySQL's BLOB or TEXT data types in the table visible to MySQL; the remainder of the BLOB or TEXT is stored in a separate internal table that is not accessible to MySQL. This gives rise to two related issues of which you should be aware whenever executing SELECT statements on tables that contain columns of these types:

    1. For any SELECT from an NDB Cluster table: If the SELECT includes a BLOB or TEXT column, the READ COMMITTED transaction isolation level is converted to a read with read lock. This is done to guarantee consistency.

    2. For any SELECT which uses a unique key lookup to retrieve any columns that use any of the BLOB or TEXT data types and that is executed within a transaction, a shared read lock is held on the table for the duration of the transaction—that is, until the transaction is either committed or aborted.

      This issue does not occur for queries that use index or table scans, even against NDB tables having BLOB or TEXT columns.

      For example, consider the table t defined by the following CREATE TABLE statement:

      CREATE TABLE t (
          a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          b INT NOT NULL,
          c INT NOT NULL,
          d TEXT,
          INDEX i(b),
          UNIQUE KEY u(c)
      ) ENGINE = NDB;
      

      Either of the following queries on t causes a shared read lock, because the first query uses a primary key lookup and the second uses a unique key lookup:

      SELECT * FROM t WHERE a = 1;
      
      SELECT * FROM t WHERE c = 1;
      

      However, none of the four queries shown here causes a shared read lock:

      SELECT * FROM t WHERE b = 1;
      
      SELECT * FROM t WHERE d = '1';
      
      SELECT * FROM t;
      
      SELECT b,c FROM t WHERE a = 1;
      

      This is because, of these four queries, the first uses an index scan, the second and third use table scans, and the fourth, while using a primary key lookup, does not retrieve the value of any BLOB or TEXT columns.

      You can help minimize issues with shared read locks by avoiding queries that use unique key lookups that retrieve BLOB or TEXT columns, or, in cases where such queries are not avoidable, by committing transactions as soon as possible afterward.

  • Rollbacks.  There are no partial transactions, and no partial rollbacks of transactions. A duplicate key or similar error causes the entire transaction to be rolled back.

    This behavior differs from that of other transactional storage engines such as InnoDB that may roll back individual statements.

  • Transactions and memory usage.  As noted elsewhere in this chapter, NDB Cluster does not handle large transactions well; it is better to perform a number of small transactions with a few operations each than to attempt a single large transaction containing a great many operations. Among other considerations, large transactions require very large amounts of memory. Because of this, the transactional behavior of a number of MySQL statements is affected as described in the following list:

    • TRUNCATE TABLE is not transactional when used on NDB tables. If a TRUNCATE TABLE fails to empty the table, then it must be re-run until it is successful.

    • DELETE FROM (even with no WHERE clause) is transactional. For tables containing a great many rows, you may find that performance is improved by using several DELETE FROM ... LIMIT ... statements to chunk the delete operation. If your objective is to empty the table, then you may wish to use TRUNCATE TABLE instead.

    • LOAD DATA statements.  LOAD DATA is not transactional when used on NDB tables.

      Important

      When executing a LOAD DATA statement, the NDB engine performs commits at irregular intervals that enable better utilization of the communication network. It is not possible to know ahead of time when such commits take place.

    • ALTER TABLE and transactions.  When copying an NDB table as part of an ALTER TABLE, the creation of the copy is nontransactional. (In any case, this operation is rolled back when the copy is deleted.)

  • Transactions and the COUNT() function.  When using NDB Cluster Replication, it is not possible to guarantee the transactional consistency of the COUNT() function on the slave. In other words, when performing on the master a series of statements (INSERT, DELETE, or both) that changes the number of rows in a table within a single transaction, executing SELECT COUNT(*) FROM table queries on the slave may yield intermediate results. This is due to the fact that SELECT COUNT(...) may perform dirty reads, and is not a bug in the NDB storage engine. (See Bug #31321 for more information.)

21.1.7.4 NDB Cluster Error Handling

Starting, stopping, or restarting a node may give rise to temporary errors causing some transactions to fail. These include the following cases:

  • Temporary errors.  When first starting a node, it is possible that you may see Error 1204 Temporary failure, distribution changed and similar temporary errors.

  • Errors due to node failure.  The stopping or failure of any data node can result in a number of different node failure errors. (However, there should be no aborted transactions when performing a planned shutdown of the cluster.)

In either of these cases, any errors that are generated must be handled within the application. This should be done by retrying the transaction.

See also Section 21.1.7.2, “Limits and Differences of NDB Cluster from Standard MySQL Limits”.

21.1.7.5 Limits Associated with Database Objects in NDB Cluster

Some database objects such as tables and indexes have different limitations when using the NDBCLUSTER storage engine:

  • Database and table names.  When using the NDB storage engine, the maximum allowed length both for database names and for table names is 63 characters. A statement using a database name or table name longer than this limit fails with an appropriate error.

  • Number of database objects.  The maximum number of all NDB database objects in a single NDB Cluster—including databases, tables, and indexes—is limited to 20320.

  • Attributes per table.  The maximum number of attributes (that is, columns and indexes) that can belong to a given table is 512.

  • Attributes per key.  The maximum number of attributes per key is 32.

  • Row size.  The maximum permitted size of any one row is 14000 bytes.

    Each BLOB or TEXT column contributes 256 + 8 = 264 bytes to this total; see String Type Storage Requirements, for more information relating to these types.

    In addition, the maximum offset for a fixed-width column of an NDB table is 8188 bytes; attempting to create a table that violates this limitation fails with NDB error 851 Maximum offset for fixed-size columns exceeded. For memory-based columns, you can work around this limitation by using a variable-width column type such as VARCHAR or defining the column as COLUMN_FORMAT=DYNAMIC; this does not work with columns stored on disk. For disk-based columns, you may be able to do so by reordering one or more of the table's disk-based columns such that the combined width of all but the disk-based column defined last in the CREATE TABLE statement used to create the table does not exceed 8188 bytes, less any possible rounding performed for some data types such as CHAR or VARCHAR; otherwise it is necessary to use memory-based storage for one or more of the offending column or columns instead.

  • BIT column storage per table.  The maximum combined width for all BIT columns used in a given NDB table is 4096.

  • FIXED column storage.  NDB Cluster 7.5 and later supports a maximum of 128 TB per fragment of data in FIXED columns. (Previously, this was 16 GB.)

21.1.7.6 Unsupported or Missing Features in NDB Cluster

A number of features supported by other storage engines are not supported for NDB tables. Trying to use any of these features in NDB Cluster does not cause errors in or of itself; however, errors may occur in applications that expect the features to be supported or enforced. Statements referencing such features, even if effectively ignored by NDB, must be syntactically and otherwise valid.

  • Index prefixes.  Prefixes on indexes are not supported for NDB tables. If a prefix is used as part of an index specification in a statement such as CREATE TABLE, ALTER TABLE, or CREATE INDEX, the prefix is not created by NDB.

    A statement containing an index prefix, and creating or modifying an NDB table, must still be syntactically valid. For example, the following statement always fails with Error 1089 Incorrect prefix key; the used key part isn't a string, the used length is longer than the key part, or the storage engine doesn't support unique prefix keys, regardless of storage engine:

    CREATE TABLE t1 (
        c1 INT NOT NULL,
        c2 VARCHAR(100),
        INDEX i1 (c2(500))
    );

    This happens on account of the SQL syntax rule that no index may have a prefix larger than itself.

  • Savepoints and rollbacks.  Savepoints and rollbacks to savepoints are ignored as in MyISAM.

  • Durability of commits.  There are no durable commits on disk. Commits are replicated, but there is no guarantee that logs are flushed to disk on commit.

  • Replication.  Statement-based replication is not supported. Use --binlog-format=ROW (or --binlog-format=MIXED) when setting up cluster replication. See Section 21.6, “NDB Cluster Replication”, for more information.

    Replication using global transaction identifiers (GTIDs) is not compatible with NDB Cluster, and is not supported in NDB Cluster 7.5 or NDB CLuster 7.6. Do not enable GTIDs when using the NDB storage engine, as this is very likely to cause problems up to and including failure of NDB Cluster Replication.

    Semisynchronous replication is not supported in NDB Cluster.

  • Generated columns.  The NDB storage engine does not support indexes on virtual generated columns.

    As with other storage engines, you can create an index on a stored generated column, but you should bear in mind that NDB uses DataMemory for storage of the generated column as well as IndexMemory for the index. See JSON columns and indirect indexing in NDB Cluster, for an example.

    NDB Cluster writes changes in stored generated columns to the binary log, but does not log those made to virtual columns. This should not affect NDB Cluster Replication or replication between NDB and other MySQL storage engines.

Note

See Section 21.1.7.3, “Limits Relating to Transaction Handling in NDB Cluster”, for more information relating to limitations on transaction handling in NDB.

21.1.7.7 Limitations Relating to Performance in NDB Cluster

The following performance issues are specific to or especially pronounced in NDB Cluster:

  • Range scans.  There are query performance issues due to sequential access to the NDB storage engine; it is also relatively more expensive to do many range scans than it is with either MyISAM or InnoDB.

  • Reliability of Records in range.  The Records in range statistic is available but is not completely tested or officially supported. This may result in nonoptimal query plans in some cases. If necessary, you can employ USE INDEX or FORCE INDEX to alter the execution plan. See Section 8.9.4, “Index Hints”, for more information on how to do this.

  • Unique hash indexes.  Unique hash indexes created with USING HASH cannot be used for accessing a table if NULL is given as part of the key.

21.1.7.8 Issues Exclusive to NDB Cluster

The following are limitations specific to the NDB storage engine:

  • Machine architecture.  All machines used in the cluster must have the same architecture. That is, all machines hosting nodes must be either big-endian or little-endian, and you cannot use a mixture of both. For example, you cannot have a management node running on a PowerPC which directs a data node that is running on an x86 machine. This restriction does not apply to machines simply running mysql or other clients that may be accessing the cluster's SQL nodes.

  • Binary logging.  NDB Cluster has the following limitations or restrictions with regard to binary logging:

  • Schema operations (DDL statements) are rejected while any data node restarts.

  • Number of replicas.  The number of replicas, as determined by the NoOfReplicas data node configuration parameter, is the number of copies of all data stored by NDB Cluster. Setting this parameter to 1 means there is only a single copy; in this case, no redundancy is provided, and the loss of a data node entails loss of data. To guarantee redundancy, and thus preservation of data even if a data node fails, set this parameter to 2, which is the default and recommended value in production.

    Setting NoOfReplicas to a value greater than 2 is possible (to a maximum of 4) but unnecessary to guard against loss of data. In addition, values greater than 2 for this parameter are not supported in production.

See also Section 21.1.7.10, “Limitations Relating to Multiple NDB Cluster Nodes”.

21.1.7.9 Limitations Relating to NDB Cluster Disk Data Storage

Disk Data object maximums and minimums.  Disk data objects are subject to the following maximums and minimums:

  • Maximum number of tablespaces: 2^32 (4294967296)

  • Maximum number of data files per tablespace: 2^16 (65536)

  • The minimum and maximum possible sizes of extents for tablespace data files are 32K and 2G, respectively. See Section 13.1.19, “CREATE TABLESPACE Syntax”, for more information.

In addition, when working with NDB Disk Data tables, you should be aware of the following issues regarding data files and extents:

  • Data files use DataMemory. Usage is the same as for in-memory data.

  • Data files use file descriptors. It is important to keep in mind that data files are always open, which means the file descriptors are always in use and cannot be re-used for other system tasks.

  • Extents require sufficient DiskPageBufferMemory; you must reserve enough for this parameter to account for all memory used by all extents (number of extents times size of extents).

Disk Data tables and diskless mode.  Use of Disk Data tables is not supported when running the cluster in diskless mode.

21.1.7.10 Limitations Relating to Multiple NDB Cluster Nodes

Multiple SQL nodes.  The following are issues relating to the use of multiple MySQL servers as NDB Cluster SQL nodes, and are specific to the NDBCLUSTER storage engine:

  • No distributed table locks.  A LOCK TABLES works only for the SQL node on which the lock is issued; no other SQL node in the cluster sees this lock. This is also true for a lock issued by any statement that locks tables as part of its operations. (See next item for an example.)

  • ALTER TABLE operations.  ALTER TABLE is not fully locking when running multiple MySQL servers (SQL nodes). (As discussed in the previous item, NDB Cluster does not support distributed table locks.)

Multiple management nodes.  When using multiple management servers:

  • If any of the management servers are running on the same host, you must give nodes explicit IDs in connection strings because automatic allocation of node IDs does not work across multiple management servers on the same host. This is not required if every management server resides on a different host.

  • When a management server starts, it first checks for any other management server in the same NDB Cluster, and upon successful connection to the other management server uses its configuration data. This means that the management server --reload and --initial startup options are ignored unless the management server is the only one running. It also means that, when performing a rolling restart of an NDB Cluster with multiple management nodes, the management server reads its own configuration file if (and only if) it is the only management server running in this NDB Cluster. See Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”, for more information.

Multiple network addresses.  Multiple network addresses per data node are not supported. Use of these is liable to cause problems: In the event of a data node failure, an SQL node waits for confirmation that the data node went down but never receives it because another route to that data node remains open. This can effectively make the cluster inoperable.

Note

It is possible to use multiple network hardware interfaces (such as Ethernet cards) for a single data node, but these must be bound to the same address. This also means that it is not possible to use more than one [tcp] section per connection in the config.ini file. See Section 21.3.3.10, “NDB Cluster TCP/IP Connections”, for more information.

21.2 NDB Cluster Installation

This section describes the basics for planning, installing, configuring, and running an NDB Cluster. Whereas the examples in Section 21.3, “Configuration of NDB Cluster” provide more in-depth information on a variety of clustering options and configuration, the result of following the guidelines and procedures outlined here should be a usable NDB Cluster which meets the minimum requirements for availability and safeguarding of data.

For information about upgrading or downgrading an NDB Cluster between release versions, see Section 21.2.9, “Upgrading and Downgrading NDB Cluster”.

This section covers hardware and software requirements; networking issues; installation of NDB Cluster; basic configuration issues; starting, stopping, and restarting the cluster; loading of a sample database; and performing queries.

GUI installation.  NDB Cluster also provides the NDB Cluster Auto-Installer, a web-based graphical installer, as part of the NDB Cluster distribution. The Auto-Installer can be used to perform basic installation and setup of an NDB Cluster on one (for testing) or more host computers. The Auto-Installer was updated for NDB 7.6 and differs in many respects from the version found in NDB 7.5 and earlier. Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”, has information about the Auto-Installer for NDB 7.5; if you are using NDB 7.6, see Section 21.2.2, “The NDB Cluster Auto-Installer (NDB 7.6)”.

Assumptions.  The following sections make a number of assumptions regarding the cluster's physical and network configuration. These assumptions are discussed in the next few paragraphs.

Cluster nodes and host computers.  The cluster consists of four nodes, each on a separate host computer, and each with a fixed network address on a typical Ethernet network as shown here:

Table 21.4 Network addresses of nodes in example cluster

Node                      IP Address
Management node (mgmd)    198.51.100.10
SQL node (mysqld)         198.51.100.20
Data node "A" (ndbd)      198.51.100.30
Data node "B" (ndbd)      198.51.100.40

This setup is also shown in the following diagram:

Figure 21.4 NDB Cluster Multi-Computer Setup

The four nodes each connect to a central switch that connects to a network.

Network addressing.  In the interest of simplicity (and reliability), this How-To uses only numeric IP addresses. However, if DNS resolution is available on your network, it is possible to use host names in lieu of IP addresses in configuring Cluster. Alternatively, you can use the hosts file (typically /etc/hosts for Linux and other Unix-like operating systems, C:\WINDOWS\system32\drivers\etc\hosts on Windows, or your operating system's equivalent) for providing a means to do host lookup if such is available.

Potential hosts file issues.  A common problem when trying to use host names for Cluster nodes arises because of the way in which some operating systems (including some Linux distributions) set up the system's own host name in the /etc/hosts file during installation. Consider two machines with the host names ndb1 and ndb2, both in the cluster network domain. Red Hat Linux (including some derivatives such as CentOS and Fedora) places the following entries in these machines' /etc/hosts files:

#  ndb1 /etc/hosts:
127.0.0.1   ndb1.cluster ndb1 localhost.localdomain localhost
#  ndb2 /etc/hosts:
127.0.0.1   ndb2.cluster ndb2 localhost.localdomain localhost

SUSE Linux (including OpenSUSE) places these entries in the machines' /etc/hosts files:

#  ndb1 /etc/hosts:
127.0.0.1       localhost
127.0.0.2       ndb1.cluster ndb1
#  ndb2 /etc/hosts:
127.0.0.1       localhost
127.0.0.2       ndb2.cluster ndb2

In both instances, ndb1 routes ndb1.cluster to a loopback IP address, but gets a public IP address from DNS for ndb2.cluster, while ndb2 routes ndb2.cluster to a loopback address and obtains a public address for ndb1.cluster. The result is that each data node connects to the management server, but cannot tell when any other data nodes have connected, and so the data nodes appear to hang while starting.

Caution

You cannot mix localhost and other host names or IP addresses in config.ini. For these reasons, the solution in such cases (other than to use IP addresses for all config.ini HostName entries) is to remove the fully qualified host names from /etc/hosts and use these in config.ini for all cluster hosts.

Host computer type.  Each host computer in our installation scenario is an Intel-based desktop PC running a supported operating system installed to disk in a standard configuration, and running no unnecessary services. The core operating system with standard TCP/IP networking capabilities should be sufficient. Also for the sake of simplicity, we also assume that the file systems on all hosts are set up identically. In the event that they are not, you should adapt these instructions accordingly.

Network hardware.  Standard 100 Mbps or 1 gigabit Ethernet cards are installed on each machine, along with the proper drivers for the cards, and all four hosts are connected through a standard-issue Ethernet networking appliance such as a switch. (All machines should use network cards with the same throughput; that is, all four machines in the cluster should have 100 Mbps cards, or all four should have 1 Gbps cards.) NDB Cluster works in a 100 Mbps network; however, gigabit Ethernet provides better performance.

Important

NDB Cluster is not intended for use in a network for which throughput is less than 100 Mbps or which experiences a high degree of latency. For this reason (among others), attempting to run an NDB Cluster over a wide area network such as the Internet is not likely to be successful, and is not supported in production.

Sample data.  We use the world database which is available for download from the MySQL website (see https://dev.mysql.com/doc/index-other.html). We assume that each machine has sufficient memory for running the operating system, required NDB Cluster processes, and (on the data nodes) storing the database.

For general information about installing MySQL, see Chapter 2, Installing and Upgrading MySQL. For information about installation of NDB Cluster on Linux and other Unix-like operating systems, see Section 21.2.3, “Installation of NDB Cluster on Linux”. For information about installation of NDB Cluster on Windows operating systems, see Section 21.2.4, “Installing NDB Cluster on Windows”.

For general information about NDB Cluster hardware, software, and networking requirements, see Section 21.1.3, “NDB Cluster Hardware, Software, and Networking Requirements”.

21.2.1 The NDB Cluster Auto-Installer (NDB 7.5)

This section describes the web-based graphical configuration installer included as part of the NDB Cluster 7.5 distribution. If you are using NDB 7.6, see Section 21.2.2, “The NDB Cluster Auto-Installer (NDB 7.6)”, for information about the updated installer that is supplied with NDB 7.6 releases.

Topics discussed in the following sections include an overview of the installer and its parts, software and other requirements for running the installer, navigating the GUI, and using the installer to set up, start, or stop an NDB Cluster on one or more host computers.

The NDB Cluster Auto-Installer is made up of two components. The front end is a GUI client implemented as a Web page that loads and runs in a standard Web browser such as Firefox or Microsoft Internet Explorer. The back end is a server process (ndb_setup.py) that runs on the local machine or on another host to which you have access.

These two components (client and server) communicate with each other using standard HTTP requests and responses. The back end can manage NDB Cluster software programs on any host where the back end user has granted access. If the NDB Cluster software is on a different host, the back end relies on SSH for access, using the Paramiko library for executing commands remotely (see Section 21.2.1.1, “NDB Cluster Auto-Installer Requirements”).

21.2.1.1 NDB Cluster Auto-Installer Requirements

This section provides information on supported operating platforms and software, required software, and other prerequisites for running the NDB Cluster Auto-Installer.

Supported platforms.  The NDB Cluster Auto-Installer is available with most NDB 7.5.2 and later NDB Cluster distributions for recent versions of Linux, Windows, Solaris, and MacOS X. For more detailed information about platform support for NDB Cluster and the NDB Cluster Auto-Installer, see https://www.mysql.com/support/supportedplatforms/cluster.html.

The NDB Cluster Auto-Installer is not supported with NDB 7.5.0 or 7.5.1 (Bug #79853, Bug #22502247).

Supported Web browsers.  The Web-based installer is supported with recent versions of Firefox and Microsoft Internet Explorer. It should also work with recent versions of Opera, Safari, and Chrome, although we have not thoroughly tested for compatibility with these browsers.

Required software—server.  The following software must be installed on the host where the Auto-Installer is run:

  • Python 2.6 or higher.  The Auto-Installer requires the Python interpreter and standard libraries. If these are not already installed on the system, you may be able to add them using the system's package manager. Otherwise, they can be downloaded from http://python.org/download/.

  • Paramiko 1.7.7.1 or higher.  This is required to communicate with remote hosts using SSH. You can download it from http://www.lag.net/paramiko/. Paramiko may also be available from your system's package manager.

  • Pycrypto version 1.9 or higher.  This cryptography module is required by Paramiko. If it is not available using your system's package manager, you can download it from https://www.dlitz.net/software/pycrypto/.

All of the software in the preceding list is included in the Windows version of the configuration tool, and does not need to be installed separately.

The Paramiko and Pycrypto libraries are required only if you intend to deploy NDB Cluster nodes on remote hosts, and are not needed if all nodes are on the same host where the installer is run.

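On a Linux or Solaris host, the software requirements just listed can be verified from a terminal before running the installer; the following is only an illustrative sketch, and assumes that the interpreter which will run ndb_setup.py is invoked as python:

```shell
# Illustrative preflight checks for the Auto-Installer host.
python --version                                            # must report 2.6 or later
python -c "import paramiko; print(paramiko.__version__)"    # Paramiko 1.7.7.1 or later
python -c "import Crypto; print(Crypto.__version__)"        # Pycrypto 1.9 or later
```

If either of the last two commands fails with an ImportError, install the missing library from your system's package manager or from the download locations given above; recall that Paramiko and Pycrypto are needed only for deployments involving remote hosts.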
Required software—remote hosts.  The only software required for remote hosts where you wish to deploy NDB Cluster nodes is the SSH server, which is usually installed by default on Linux and Solaris systems. Several alternatives are available for Windows; for an overview of these, see http://en.wikipedia.org/wiki/Comparison_of_SSH_servers.

An additional requirement when using multiple hosts is that it is possible to authenticate to any of the remote hosts using SSH and the proper keys or user credentials, as discussed in the next few paragraphs:

Authentication and security.  Three basic security or authentication mechanisms for remote access are available to the Auto-Installer, which we list and describe here:

  • SSH.  A secure shell connection is used to enable the back end to perform actions on remote hosts. For this reason, an SSH server must be running on the remote host. In addition, the operating system user running the installer must have access to the remote server, either with a user name and password, or by using public and private keys.

    Important

    You should never use the system root account for remote access, as this is extremely insecure. In addition, mysqld cannot normally be started by system root. For these and other reasons, you should provide SSH credentials for a regular user account on the target system, and not for system root. For more information about this issue, see Section 6.1.5, “How to Run MySQL as a Normal User”.

  • HTTPS.  Remote communication between the Web browser front end and the back end is not encrypted by default, which means that information such as the user's SSH password is transmitted as cleartext that is readable to anyone. For communication from a remote client to be encrypted, the back end must have a certificate, and the front end must communicate with the back end using HTTPS rather than HTTP. Enabling HTTPS is accomplished most easily through issuing a self-signed certificate. Once the certificate is issued, you must make sure that it is used. You can do this by starting ndb_setup.py from the command line with the --use-https and --cert-file options.

  • Certificate-based authentication.  The back end ndb_setup.py process can execute commands on the local host as well as remote hosts. This means that anyone connecting to the back end can take charge of how commands are executed. To reject unwanted connections to the back end, a certificate may be required for authentication of the client. In this case, a certificate must be issued by the user, installed in the browser, and made available to the back end for authentication purposes. You can enact this requirement (together with or in place of password or key authentication) by starting ndb_setup.py with the --ca-certs-file option.

There is no need or requirement for secure authentication when the client browser is running on the same host as the Auto-Installer back end.

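As a sketch of how the options just described fit together, the back end might be started with both HTTPS and certificate-based client authentication enabled as shown here; cert.pem and ca_certs.pem are placeholder file names for the server certificate and the client CA certificates, not files shipped with NDB Cluster:

```shell
# Hypothetical invocation: encrypt front-end/back-end traffic with the
# server certificate in cert.pem, and require connecting browsers to
# present a certificate signed by a CA listed in ca_certs.pem.
ndb_setup.py --use-https --cert-file=cert.pem --ca-certs-file=ca_certs.pem
```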
See also Section 21.5.12, “NDB Cluster Security Issues”, which discusses security considerations to take into account when deploying NDB Cluster, as well as Chapter 6, Security, for more general MySQL security information.

21.2.1.2 Using the NDB Cluster Auto-Installer

The NDB Cluster Auto-Installer consists of several pages, each corresponding to a step in the process used to configure and deploy an NDB Cluster, and listed here:

  • Welcome: Begin using the Auto-Installer by choosing either to configure a new NDB Cluster, or to continue configuring an existing one.

  • Define Cluster: Set basic information about the cluster as a whole, such as name, hosts, and load type. Here you can also set the SSH authentication type for accessing remote hosts, if needed.

  • Define Hosts: Identify the hosts where you intend to run NDB Cluster processes.

  • Define Processes: Assign one or more processes of a given type or types to each cluster host.

  • Define Attributes: Set configuration attributes for processes or types of processes.

  • Deploy Cluster: Deploy the cluster with the configuration set previously; start and stop the deployed cluster.

The following sections describe in greater detail the purpose and function of each of these pages, in the order just listed.

Starting the NDB Cluster Auto-Installer

The Auto-Installer is provided together with the NDB Cluster software. (See Section 21.2, “NDB Cluster Installation”.) The present section explains how to start the installer. You can do so by invoking the ndb_setup.py executable.

Important

You should run the ndb_setup.py as a normal user; no special privileges are needed to do so. You should not run this program as the mysql user, or using the system root or Administrator account; doing so may cause the installation to fail.

ndb_setup.py is found in the bin directory within the NDB Cluster installation directory; a typical location might be /usr/local/mysql/bin on a Linux system or C:\Program Files\MySQL\MySQL Server 5.6\bin on a Windows system, but this can vary according to where the NDB Cluster software is installed on your system.

On Windows, you can also start the installer by running setup.bat in the NDB Cluster installation directory. When invoked from the command line, it accepts the same options as does ndb_setup.py.

ndb_setup.py can be started with any of several options that affect its operation, but it is usually sufficient to allow the default settings to be used, in which case you can start ndb_setup.py by either of the following two methods:

  1. Navigate to the NDB Cluster bin directory in a terminal and invoke it from the command line, without any additional arguments or options, like this:

    shell> ndb_setup
    

    This works regardless of operating platform.

  2. Navigate to the NDB Cluster bin directory in a file browser (such as Windows Explorer on Windows, or Konqueror, Dolphin, or Nautilus on Linux) and activate (usually by double-clicking) the ndb_setup.py file icon. This works on Windows, and should work with most common Linux desktops as well.

    On Windows, you can also navigate to the NDB Cluster installation directory and activate the setup.bat file icon.

In either case, once ndb_setup.py is invoked, the Auto-Installer's Welcome screen should open in the system's default Web browser.

In some cases, you may wish to use non-default settings for the installer, such as specifying a different port for the Auto-Installer's included Web server to run on, in which case you must invoke ndb_setup.py with one or more startup options with values overriding the necessary defaults. The same startup options can be used on Windows systems with the setup.bat file supplied for such platforms in the NDB Cluster software distribution. This can be done using the command line, but if you want or need to start the installer from a desktop or file browser while employing one or more of these options, it is also possible to create a script or batch file containing the proper invocation, then to double-click its file icon in the file browser to start the installer. (On Linux systems, you might also need to make the script file executable first.) For information about advanced startup options for the NDB Cluster Auto-Installer, see Section 21.4.27, “ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster”.

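Such a wrapper script might look like the following minimal sketch; the installation path and the option values shown are placeholders to be adapted to your system (see Section 21.4.27 for the full list of startup options):

```shell
#!/bin/sh
# Hypothetical wrapper script for starting the Auto-Installer from a
# desktop file browser with a non-default Web server port.
# Make it executable first:  chmod +x start-ndb-setup.sh
/usr/local/mysql/bin/ndb_setup.py --port=8081
```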
NDB Cluster Auto-Installer Welcome Screen

The Welcome screen is loaded in the default browser when ndb_setup.py is invoked, as shown here:

Figure 21.5 The NDB Cluster Auto-Installer Welcome screen (Closeup)

Content is described in the surrounding text.

This screen provides the following two choices for entering the installer, one of which must be selected to continue:

  1. Create New NDB Cluster: Start the Auto-Installer with a completely new cluster to be set up and deployed.

  2. Continue Previous Cluster Configuration: Start the Auto-Installer at the same point where the previous session ended, with all previous settings preserved.

The second option requires that the browser be able to access its cookies from the previous session, as these provide the mechanism by which configuration and other information generated during a session is stored. In other words, to continue the previous session with the Auto-Installer, you must use the same web browser running on the same host as you did for the previous session.

NDB Cluster Auto-Installer Define Cluster Screen

The Define Cluster screen is the first screen to appear following the choice made in the Welcome screen, and is used for setting general properties of the cluster. The layout of the Define Cluster screen is shown here:

Figure 21.6 The NDB Cluster Auto-Installer Define Cluster screen

Content is described in the surrounding text.

The Define Cluster screen allows you to set a number of general properties for the cluster, as described in this list:

  • Cluster name: A name that identifies the cluster. The default is MyCluster.

  • Host list: A comma-delimited list of one or more hosts where cluster processes should run. By default, this is 127.0.0.1. If you add remote hosts to the list, you must be able to connect to them using the SSH Credentials supplied.

  • Application type: Choose one of the following:

    1. Simple testing: Minimal resource usage for small-scale testing. This is the default. Not intended for production environments.

    2. Web: Maximize performance for the given hardware.

    3. Real-time: Maximize performance while maximizing sensitivity to timeouts in order to minimize the time needed to detect failed cluster processes.

  • Write load: Choose a level for the anticipated number of writes for the cluster as a whole. You can choose any one of the following levels:

    1. Low: The expected load includes fewer than 100 write transactions per second.

    2. Medium: The expected load includes 100 to 1000 write transactions per second.

    3. High: The expected load includes more than 1000 write transactions per second.

  • SSH Credentials: Choose Key-Based SSH or enter User and Password credentials. The SSH key or a user name with password is required for connecting to any remote hosts specified in the Host list. By default, Key-Based SSH is selected, and the User and Password fields are blank.

NDB Cluster Auto-Installer Define Hosts Screen

The Define Hosts screen, shown here, provides a means of viewing and specifying several key properties of each cluster host:

Figure 21.7 NDB Cluster Define Hosts screen

Content is described in the surrounding text.

The hosts currently entered are displayed in the grid with various pieces of information. You can add hosts by clicking the Add hosts button and entering a list of one or more comma-separated host names, IP addresses, or both (as when editing the host list on the Define Cluster screen).

Similarly, you can remove one or more hosts using the button labelled Remove selected host(s). When you remove a host in this fashion, any process which was configured for that host is also removed.

If Automatically get resource information for new hosts is checked in the Settings menu, the Auto-Installer attempts to retrieve the platform name, amount of memory, and number of CPU cores and to fill these in automatically. The status of this is displayed in the Resource info column. Fetching the information from remote hosts is not instantaneous and may take some time, particularly from remote hosts running Windows.

If the SSH user credentials on the Define Cluster screen are changed, the tool tries to refresh the hardware information from any hosts for which information is missing. However, if a given field has already been edited, the user-supplied information is not overwritten by any value fetched from that host.

The hardware resource information, platform name, installation directory, and data directory can be edited by the user by clicking the corresponding cell in the grid, or by selecting one or more hosts and clicking the button labelled Edit selected host(s). This causes a dialog box to appear, in which these fields can be edited, as shown here:

Figure 21.8 NDB Cluster Auto-Installer Edit Hosts dialog

Content is described in the surrounding text.

When more than one host is selected, any edited values are applied to all selected hosts.

NDB Cluster Auto-Installer Define Processes Screen

The Define Processes screen, shown here, provides a way to assign NDB Cluster processes (nodes) to cluster hosts:

Figure 21.9 NDB Cluster Auto-Installer Define Processes dialog

Content is described in the surrounding text. The example process tree topology includes "Any host" and "127.0.0.1", as defined earlier. The 127.0.0.1 example includes the following processes: Management node 1, API node 1, API node 2, API node 3, SQL node 1, SQL node 2, Multi threaded data node 1, and Multi threaded data node 2. This panel also includes "Add process" and "Delete process" buttons.

This screen contains a process tree showing cluster hosts and processes set up to run on each one, as well as a panel which displays information about the item currently selected in the tree.

When this screen is accessed for the first time for a given cluster, a default set of processes is defined for you, based on the number of hosts. If you later return to the Define Hosts screen, remove all hosts, and add new hosts, this also causes a new default set of processes to be defined.

NDB Cluster processes are of the following types:

  • Management node.  Performs administrative tasks such as stopping individual data nodes, querying node and cluster status, and making backups. Executable: ndb_mgmd.

  • Single-threaded data node.  Stores data and executes queries. Executable: ndbd.

  • Multi threaded data node.  Stores data and executes queries with multiple worker threads executing in parallel. Executable: ndbmtd.

  • SQL node.  MySQL server for executing SQL queries against NDB. Executable: mysqld.

  • API node.  A client accessing data in NDB by means of the NDB API or other low-level client API, rather than by using SQL. See MySQL NDB Cluster API Developer Guide, for more information.

For more information about process (node) types, see Section 21.1.1, “NDB Cluster Core Concepts”.

Processes shown in the tree are numbered sequentially by type, for each host—for example, SQL node 1, SQL node 2, and so on—to simplify identification.

Each management node, data node, or SQL process must be assigned to a specific host, and is not allowed to run on any other host. An API node may be assigned to a single host, but this is not required. Instead, you can assign it to the special Any host entry which the tree also contains in addition to any other hosts, and which acts as a placeholder for processes that are allowed to run on any host. Only API processes may use this Any host entry.

Adding processes.  To add a new process to a given host, either right-click that host's entry in the tree, then select the Add process popup when it appears, or select a host in the process tree, and press the Add process button below the process tree. Performing either of these actions opens the add process dialog, as shown here:

Figure 21.10 NDB Cluster Auto-Installer Add Process Dialog

Most content is described in the surrounding text. Shows a window titled "Add new process" with two options: "Select process type:" that shows a select box with "API node" selected, and "Enter process name:" with "API node 4" entered as plain text. Action buttons include "Cancel" and "Add".

Here you can select from among the available process types described earlier this section; you can also enter an arbitrary process name to take the place of the suggested value, if desired.

Removing processes.  To delete a process, right-click on a process in the tree and select delete process from the pop up menu that appears, or select a process, then use the delete process button below the process tree.

When a process is selected in the process tree, information about that process is displayed in the information panel, where you can change the process name and possibly its type. Important: Currently, you can change a single-threaded data node (ndbd) to a multithreaded data node (ndbmtd), or the reverse, only; no other process type changes are allowed. If you want to make a change between any other process types, you must delete the original process first, then add a new process of the desired type.

NDB Cluster Auto-Installer Define Attributes Screen

This screen has a layout similar to that of the Define Processes screen, including a process tree. Unlike that screen's tree, the Define Attributes process tree is organized by process or node type, with single-threaded and multithreaded data nodes considered to be of the same type for this purpose, in groups labelled Management Layer, Data Layer, SQL Layer, and API Layer. An information panel displays information regarding the item currently selected. The Define Attributes screen is shown here:

Figure 21.11 NDB Cluster Auto-Installer Define Attributes screen

Content is described in the surrounding text.

The checkbox labelled Show advanced configuration, when checked, makes advanced options visible in the information pane. These options are set and used whether or not they are visible.

You can edit attributes for a single process by selecting that process from the tree, or for all processes of the same type in the cluster by selecting one of the Layer folders. A per-process value set for a given attribute overrides any per-group setting for that attribute that would otherwise apply to the process in question. An example of such an information panel (for an SQL process) is shown here:

Figure 21.12 Define Attributes Detail With SQL Process Attributes

Most content is described in the surrounding text. SQL Node 1 is selected and displays property fields for "NodeId", "HostName", "DataDir", "Port", and "Socket". The "DataDir", "Port", and "Socket" rows include a green plus sign button on the right indicating that they can be edited.

For some of the attributes shown in the information panel, a button bearing a plus sign is displayed, which means that the value of the attribute can be overridden. This + button activates an input widget for the attribute, enabling you to change its value. When the value has been overridden, this button changes into a button showing an X, as shown here:

Figure 21.13 Define Attributes Detail, Overriding Attribute Default Value

Most content is described in the surrounding text. Is like the previous image but with the green plus sign button was clicked and its entry can now be edited. The green plus sign was replaced with a red X.

Clicking the X button next to an attribute undoes any changes made to it; it immediately reverts to the predefined value.

All configuration attributes have predefined values calculated by the installer, based on such factors as host name, node ID, node type, and so on. In most cases, these values may be left as they are. If you are not familiar with it already, it is highly recommended that you read the applicable documentation before making changes to any of the attribute values. To make finding this information easier, each attribute name shown in the information panel is linked to its description in the online NDB Cluster documentation.

NDB Cluster Auto-Installer Deploy Cluster Screen

This screen allows you to perform the following tasks:

  • Review process startup commands and configuration files to be applied

  • Distribute configuration files by creating any necessary files and directories on all cluster hosts—that is, deploy the cluster as presently configured

  • Start and stop the cluster

The Deploy Cluster screen is shown here:

Figure 21.14 NDB Cluster Auto-Installer Deploy Cluster Configuration screen

Like the Define Attributes screen, this screen features a process tree which is organized by process type. Next to each process in the tree is a status icon indicating the current status of the process: connected (CONNECTED), starting (STARTING), running (STARTED), stopping (STOPPING), or disconnected (NO_CONTACT). The icon shows green if the process is connected or running; yellow if it is starting or stopping; red if the process is stopped or cannot be contacted by the management server.

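As a rough illustration, the status-to-color mapping just described can be expressed as a small lookup table. This is an illustrative sketch, not the installer's actual code:

```python
# Map the process states named above to the icon colors used on the
# Deploy Cluster screen: green for connected/running, yellow for
# starting/stopping, red otherwise. Illustrative only.
STATUS_COLORS = {
    "CONNECTED": "green",
    "STARTED": "green",
    "STARTING": "yellow",
    "STOPPING": "yellow",
    "NO_CONTACT": "red",
}

def icon_color(status):
    """Return the icon color for a process status; unknown states show red."""
    return STATUS_COLORS.get(status, "red")

for status in ("CONNECTED", "STARTING", "STARTED", "STOPPING", "NO_CONTACT"):
    print(status, "->", icon_color(status))
```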

This screen also contains two information panels, one showing the startup command or commands needed to start the selected process. (For some processes, more than one command may be required—for example, if initialization is necessary.) The other panel shows the contents of the configuration file, if any, for the given process; currently, the management node process is the only type of process having a configuration file. Other process types are configured using command-line parameters when starting the process, or by obtaining configuration information from the management nodes as needed in real time.

This screen also contains three buttons, whose labels and functions are described in the following list:

  • Deploy cluster: Verify that the configuration is valid. Create any directories required on the cluster hosts, and distribute the configuration files onto the hosts. A progress bar shows how far the deployment has proceeded.

  • Start cluster: The cluster is deployed as with Deploy cluster, after which all cluster processes are started in the correct order.

    Starting these processes may take some time. If the estimated time to completion is too long, the installer provides an opportunity to cancel or to continue the startup procedure. A progress bar indicates the current status of the startup procedure, as shown here:

    Figure 21.15 Progress Bar With Status of Node Startup Process

    Progress bar showing status of node startup process. The small window is titled "Starting cluster" with a progress bar at 40% in the "Starting Cluster processes" phase of the operation.

    The process status icons adjoining the process tree mentioned previously also update with the status of each process.

  • Stop cluster: After the cluster has been started, you can stop it using this button. As with starting the cluster, cluster shutdown is not instantaneous, and may require some time to complete. A progress bar, similar to that displayed during cluster startup, shows the approximate current status of the cluster shutdown procedure, as do the process status icons adjoining the process tree.

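The "correct order" mentioned for Start cluster is the usual NDB Cluster startup sequence: management nodes first, then data nodes, then SQL nodes. A minimal sketch of such an ordering follows; the function and table here are illustrative, not part of the installer:

```python
# NDB Cluster startup ordering: ndb_mgmd (management nodes) before
# ndbd/ndbmtd (data nodes) before mysqld (SQL nodes). Illustrative sketch.
START_ORDER = {"ndb_mgmd": 0, "ndbd": 1, "ndbmtd": 1, "mysqld": 2}

def startup_sequence(processes):
    """Sort a list of process names into a valid cluster startup order."""
    return sorted(processes, key=lambda p: START_ORDER[p])

print(startup_sequence(["mysqld", "ndbmtd", "ndb_mgmd", "ndbmtd"]))
```

Cluster shutdown proceeds in roughly the reverse order, which is why stopping the cluster is likewise not instantaneous.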

The Auto-Installer generates a my.cnf file containing the appropriate options for each mysqld process in the cluster.

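For reference, a minimal hand-written equivalent of such a my.cnf might look like the following; the management server address shown is a placeholder:

```ini
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.10:1186
```

The ndbcluster option enables the NDBCLUSTER storage engine, and ndb-connectstring tells mysqld where to reach the management server (1186 is the default management node port).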

21.2.2 The NDB Cluster Auto-Installer (NDB 7.6)

This section describes the web-based graphical configuration installer included as part of the NDB Cluster 7.6 distribution. This version of the installer differs in many respects from that supplied with NDB 7.5 and earlier releases; if you are using NDB 7.5, see Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”, for relevant information. Some of the key improvements are listed here:

  • Persistent storage in an encrypted file as an alternative to cookie-based storage, now used by default

  • Secure (HTTPS) connections by default

  • Upgraded Paramiko security library

  • Ability to use passwords for encrypted private keys, and to use different credentials with different hosts

  • Improved host information retrieval

  • Improved configuration; advanced configuration parameters

See also Section 21.1.4.2, “What is New in NDB Cluster 7.6”.

Topics discussed in the following sections include an overview of the installer and its parts, software and other requirements for running the installer, navigating the GUI, and using the installer to set up and start or stop an NDB Cluster on one or more host computers.

The NDB Cluster Auto-Installer is made up of two components. The front end is a GUI client implemented as a Web page that loads and runs in a standard Web browser such as Firefox or Microsoft Internet Explorer. The back end is a server process (ndb_setup.py) that runs on the local machine or on another host to which you have access.

These two components (client and server) communicate with each other using standard HTTP requests and responses. The back end can manage NDB Cluster software programs on any host where the back end user has granted access. If the NDB Cluster software is on a different host, the back end relies on SSH for access, using the Paramiko library for executing commands remotely (see Section 21.2.2.1, “NDB Cluster Auto-Installer Requirements”).

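The following self-contained sketch illustrates this request/response pattern between a browser front end and a local back-end web server, using only the Python standard library. The endpoint and JSON payload are invented for the example and do not reflect the Auto-Installer's actual protocol:

```python
# Minimal client/server HTTP round trip: a back-end process answers a
# front-end request with a JSON response. Illustrative only; the
# Auto-Installer's real endpoints and payloads differ.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class BackEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "STARTED"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), BackEnd)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/status" % server.server_address[1]
reply = json.loads(urlopen(url).read())
server.shutdown()
print(reply)
```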

21.2.2.1 NDB Cluster Auto-Installer Requirements

This section provides information on supported operating platforms and software, required software, and other prerequisites for running the NDB Cluster Auto-Installer.

Supported platforms.  The NDB Cluster Auto-Installer is available with NDB 7.6 distributions for recent versions of Linux, Windows, Solaris, and macOS. For more detailed information about platform support for NDB Cluster and the NDB Cluster Auto-Installer, see https://www.mysql.com/support/supportedplatforms/cluster.html.

Supported web browsers.  The web-based installer is supported with recent versions of Firefox and Microsoft Internet Explorer. It should also work with recent versions of Opera, Safari, and Chrome, although we have not thoroughly tested for compatibility with these browsers.

Required software—server.  The following software must be installed on the host where the Auto-Installer is run:

  • Python 2.6 or higher.  The Auto-Installer requires the Python interpreter and standard libraries. If these are not already installed on the system, you may be able to add them using the system's package manager. Otherwise, you can download them from http://python.org/download/.

  • Paramiko 2 or higher.  This is required to communicate with remote hosts using SSH. You can download it from http://www.lag.net/paramiko/. Paramiko may also be available from your system's package manager.

  • Pycrypto version 2.6 or higher.  This cryptography module is required by Paramiko, and can be installed using pip install cryptography. If pip is not installed, and the module is not available using your system's package manager, you can download it from https://www.dlitz.net/software/pycrypto/.

All of the software in the preceding list is included in the Windows version of the configuration tool, and does not need to be installed separately.

The Paramiko and Pycrypto libraries are required only if you intend to deploy NDB Cluster nodes on remote hosts, and are not needed if all nodes are on the same host where the installer is run.

Required software—remote hosts.  The only software required for remote hosts where you wish to deploy NDB Cluster nodes is the SSH server, which is usually installed by default on Linux and Solaris systems. Several alternatives are available for Windows; for an overview of these, see http://en.wikipedia.org/wiki/Comparison_of_SSH_servers.

An additional requirement when using multiple hosts is that you must be able to authenticate to any of the remote hosts using SSH and the proper keys or user credentials, as discussed in the next few paragraphs:

Authentication and security.  Three basic security or authentication mechanisms for remote access are available to the Auto-Installer, which we list and describe here:

  • SSH.  A secure shell connection is used to enable the back end to perform actions on remote hosts. For this reason, an SSH server must be running on the remote host. In addition, the operating system user running the installer must have access to the remote server, either with a user name and password, or by using public and private keys.

    Important

    You should never use the system root account for remote access, as this is extremely insecure. In addition, mysqld cannot normally be started by system root. For these and other reasons, you should provide SSH credentials for a regular user account on the target system, and not for system root. For more information about this issue, see Section 6.1.5, “How to Run MySQL as a Normal User”.

  • HTTPS.  Remote communication between the Web browser front end and the back end is not encrypted by default, which means that information such as the user's SSH password is transmitted as cleartext that is readable to anyone. For communication from a remote client to be encrypted, the back end must have a certificate, and the front end must communicate with the back end using HTTPS rather than HTTP. Enabling HTTPS is accomplished most easily through issuing a self-signed certificate. Once the certificate is issued, you must make sure that it is used. You can do this by starting ndb_setup.py from the command line with the --use-https (-S) and --cert-file (-c) options.

    A sample certificate file cfg.pem is included and is used by default. This file is located in the mcc directory under the installation share directory; on Linux, the full path to the file is normally /usr/share/mysql/mcc/cfg.pem. On Windows systems, this is usually C:\Program Files\MySQL\MySQL Server 5.7\share\mcc\cfg.pem. Letting the default be used means that, for testing purposes, you can simply start the installer with the -S option to use an HTTPS connection between the browser and the back end.

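    For example, on a Linux system you might start the back end like this (a usage sketch, not meant to be copied verbatim; the certificate path is the Linux default location mentioned above):

    ```shell
    shell> ndb_setup.py --use-https --cert-file=/usr/share/mysql/mcc/cfg.pem

    # or, relying on the default certificate:
    shell> ndb_setup.py -S
    ```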

    The Auto-Installer saves the configuration file for a given cluster mycluster01 as mycluster01.mcc in the home directory of the user invoking the ndb_setup.py executable. This file is encrypted with a passphrase supplied by the user (using Fernet); because HTTP transmits the passphrase in the clear, it is strongly recommended that you always use an HTTPS connection to access the Auto-Installer on a remote host.

  • Certificate-based authentication.  The back end ndb_setup.py process can execute commands on the local host as well as remote hosts. This means that anyone connecting to the back end can take charge of how commands are executed. To reject unwanted connections to the back end, a certificate may be required for authentication of the client. In this case, a certificate must be issued by the user, installed in the browser, and made available to the back end for authentication purposes. You can enforce this requirement (together with or in place of password or key authentication) by starting ndb_setup.py with the --ca-certs-file (-a) option.

There is no need or requirement for secure authentication when the client browser is running on the same host as the Auto-Installer back end.

See also Section 21.5.12, “NDB Cluster Security Issues”, which discusses security considerations to take into account when deploying NDB Cluster, as well as Chapter 6, Security, for more general MySQL security information.

21.2.2.2 Using the NDB Cluster Auto-Installer

The NDB Cluster Auto-Installer interface is made up of several pages, each corresponding to a step in the process used to configure and deploy an NDB Cluster. These pages are listed here, in order:

  • Welcome: Begin using the Auto-Installer by choosing either to configure a new NDB Cluster, or to continue configuring an existing one.

  • Define Cluster: Set basic information about the cluster as a whole, such as name, hosts, and load type. Here you can also set the SSH authentication type for accessing remote hosts, if needed.

  • Define Hosts: Identify the hosts where you intend to run NDB Cluster processes.

  • Define Processes: Assign one or more processes of a given type or types to each cluster host.

  • Define Parameters: Set configuration attributes for processes or types of processes.

  • Deploy Configuration: Deploy the cluster with the configuration set previously; start and stop the deployed cluster.

NDB Cluster Installer Settings and Help Menus

These menus are shown on all screens except for the Welcome screen. They provide access to installer settings and information. The Settings menu is shown here in more detail:

Figure 21.16 NDB Cluster Auto-Installer Settings menu

The Settings menu has the following entries:

  • Automatically save configuration as cookies: Save your configuration information—such as host names, process data, and parameter values—as a cookie in the browser. When this option is chosen, all information except any SSH password is saved. This means that you can quit and restart the browser, and continue working on the same configuration from where you left off at the end of the previous session. This option is enabled by default.

    The SSH password is never saved; if you use one, you must supply it at the beginning of each new session.

  • Show advanced configuration options: Display advanced configuration parameters, where available, by default.

    Once set, the advanced parameters continue to be used in the configuration file until they are explicitly changed or reset. This is regardless of whether the advanced parameters are currently visible in the installer; in other words, disabling the menu item does not reset the values of any of these parameters.

    You can also toggle the display of advanced parameters for individual processes on the Define Parameters screen.

    This option is disabled by default.

  • Automatically get resource information for new hosts: Query new hosts automatically for hardware resource information to pre-populate a number of configuration options and values. In this case, the suggested values are not mandatory, but they are used unless explicitly changed using the appropriate editing options in the installer.

    This option is enabled by default.

The installer Help menu is shown here:

Figure 21.17 NDB Cluster Auto-Installer Help menu

The Help menu provides several options, described in the following list:

  • Contents: Show the built-in user guide. This is opened in a separate browser window, so that it can be used simultaneously with the installer without interrupting workflow.

  • Current page: Open the built-in user guide to the section describing the page currently displayed in the installer.

  • About: Open a dialog displaying the installer name and the version number of the NDB Cluster distribution with which it was supplied.

The Auto-Installer also provides context-sensitive help in the form of tooltips for most input widgets.

In addition, the names of most NDB configuration parameters are linked to their descriptions in the online documentation. The documentation is displayed in a separate browser window.

The next section discusses starting the Auto-Installer. The sections immediately following it describe in greater detail the purpose and function of each of these pages in the order listed previously.

Starting the NDB Cluster Auto-Installer

The Auto-Installer is provided together with the NDB Cluster software. Separate RPM and .deb packages containing only the Auto-Installer are also available for many Linux distributions. (See Section 21.2, “NDB Cluster Installation”.)

The present section explains how to start the installer. You can do this by invoking the ndb_setup.py executable.

User and privileges

You should run the ndb_setup.py as a normal user; no special privileges are needed to do so. You should not run this program as the mysql user, or using the system root or Administrator account; doing so may cause the installation to fail.

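This guidance can be made concrete with a small POSIX-oriented check. It is an illustrative sketch only, not part of ndb_setup.py itself:

```python
# Illustrative sketch of the "run as a normal user" rule above; not part
# of ndb_setup.py. os.geteuid() exists only on POSIX systems, so the
# runtime check is skipped on Windows.
import os

def is_normal_user(euid):
    """A non-root effective UID indicates a regular account."""
    return euid != 0

if hasattr(os, "geteuid"):  # POSIX only
    if is_normal_user(os.geteuid()):
        print("OK: running as a regular user")
    else:
        print("warning: do not run the installer as root")
```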

ndb_setup.py is found in the bin directory within the NDB Cluster installation directory; a typical location might be /usr/local/mysql/bin on a Linux system or C:\Program Files\MySQL\MySQL Server 5.7\bin on a Windows system. This can vary according to where the NDB Cluster software is installed on your system, and the installation method.

On Windows, you can also start the installer by running setup.bat in the NDB Cluster installation directory. When invoked from the command line, this batch file accepts the same options as ndb_setup.py.

ndb_setup.py can be started with any of several options that affect its operation, but it is usually sufficient to allow the default settings to be used, in which case you can start ndb_setup.py by either of the following two methods:

  1. Navigate to the NDB Cluster bin directory in a terminal and invoke it from the command line, without any additional arguments or options, like this:

    shell> ndb_setup.py
    Running out of install dir: /usr/local/mysql/bin
    Starting web server on port 8081
    URL is https://localhost:8081/welcome.html
    deathkey=627876
    Press CTRL+C to stop web server.
    The application should now be running in your browser.
    (Alternatively you can navigate to https://localhost:8081/welcome.html to start it)
    

    This works regardless of operating platform.

  2. Navigate to the NDB Cluster bin directory in a file browser (such as Windows Explorer on Windows, or Konqueror, Dolphin, or Nautilus on Linux) and activate (usually by double-clicking) the ndb_setup.py file icon. This works on Windows, and should work with most common Linux desktops as well.

    On Windows, you can also navigate to the NDB Cluster installation directory and activate the setup.bat file icon.

In either case, once ndb_setup.py is invoked, the Auto-Installer's Welcome screen should open in the system's default web browser. If not, you should be able to open the page http://localhost:8081/welcome.html or https://localhost:8081/welcome.html manually in the browser.

In some cases, you may wish to use non-default settings for the installer, such as specifying HTTPS for connections, or a different port for the Auto-Installer's included web server to run on, in which case you must invoke ndb_setup.py with one or more startup options with values overriding the necessary defaults. The same startup options can be used on Windows systems with the setup.bat file supplied for such platforms in the NDB Cluster software distribution. This can be done using the command line, but if you want or need to start the installer from a desktop or file browser while employing one or more of these options, it is also possible to create a script or batch file containing the proper invocation, then to double-click its file icon in the file browser to start the installer. (On Linux systems, you might also need to make the script file executable first.) If you plan to use the Auto-Installer from a remote host, you should start it with the -S option. For information about this and other advanced startup options for the NDB Cluster Auto-Installer, see Section 21.4.27, “ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster”.

NDB Cluster Auto-Installer Welcome Screen

The Welcome screen is loaded in the default browser when ndb_setup.py is invoked. The first time the Auto-Installer is run (or if for some other reason there are no existing configurations), this screen appears as shown here:

Figure 21.18 The NDB Cluster Auto-Installer Welcome screen, first run

In this case, the only choice of cluster listed is for configuration of a new cluster, and both the View Cfg and Continue buttons are inactive.

To create a new configuration, enter and confirm a passphrase in the text boxes provided. When this has been done, you can click Continue to proceed to the Define Cluster screen where you can assign a name to the new cluster.

If you have previously created one or more clusters with the Auto-Installer, they are listed by name. This example shows an existing cluster named mycluster-1:

Figure 21.19 The NDB Cluster Auto-Installer Welcome screen, with previously created cluster mycluster-1

To view the configuration for and work with a given cluster, select the radio button next to its name in the list, then enter and confirm the passphrase that was used to create it. When you have done this correctly, you can click View Cfg to view and edit this cluster's configuration.

NDB Cluster Auto-Installer Define Cluster Screen

The Define Cluster screen appears following the Welcome screen, and is used for setting general properties of the cluster. The layout of the Define Cluster screen is shown here:

Figure 21.20 The NDB Cluster Auto-Installer Define Cluster screen

This screen and subsequent screens also include Settings and Help menus which are described later in this section; see NDB Cluster Installer Settings and Help Menus.

The Define Cluster screen allows you to set three sorts of properties for the cluster: cluster properties, SSH properties, and installation properties.

Cluster properties that can be set on this screen are listed here:

  • Cluster name: A name that identifies the cluster; in this example, this is mycluster-1. The name is set on the previous screen and cannot be changed here.

  • Host list: A comma-delimited list of one or more hosts where cluster processes should run. By default, this is 127.0.0.1. If you add remote hosts to the list, you must be able to connect to them using the credentials supplied as SSH properties.

  • Application type: Choose one of the following:

    1. Simple testing: Minimal resource usage for small-scale testing. This is the default. Not intended for production environments.

    2. Web: Maximize performance for the given hardware.

    3. Real-time: Maximize performance while maximizing sensitivity to timeouts in order to minimize the time needed to detect failed cluster processes.

  • Write load: Choose a level for the anticipated number of writes for the cluster as a whole. You can choose any one of the following levels:

    1. Low: The expected load includes fewer than 100 write transactions per second.

    2. Medium: The expected load includes 100 to 1000 write transactions per second; this is the default.

    3. High: The expected load includes more than 1000 write transactions per second.

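The Write load thresholds above can be summarized in a small helper. This is illustrative only; the installer applies these levels internally:

```python
# Write load levels as described in the text: Low is fewer than 100
# writes per second, Medium is 100 to 1000, High is more than 1000.
def write_load_level(writes_per_second):
    if writes_per_second < 100:
        return "Low"
    if writes_per_second <= 1000:
        return "Medium"
    return "High"

print(write_load_level(50), write_load_level(500), write_load_level(5000))
# prints: Low Medium High
```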

SSH properties are described in the following list:

  • Key-Based SSH: Check this box to use key-enabled login to the remote host. If checked, the key user and passphrase must also be supplied; otherwise, a user and password for a remote login account are needed.

  • User: Name of user with remote login access.

  • Password: Password for remote user.

  • Key user: Name of the user for whom the key is valid, if not the same as the operating system user.

  • Key passphrase: Passphrase for the key, if required.

  • Key file: Path to the key file. The default is ~/.ssh/id_rsa.

The SSH properties set on this page apply to all hosts in the cluster. They can be overridden for a given host by editing that host's properties on the Define Hosts screen.

Two installation properties can also be set on this screen:

  • Install MySQL Cluster: This setting determines the source from which the Auto-Installer installs NDB Cluster software, if any, on the cluster hosts. Possible values and their effects are listed here:

    1. DOCKER: Try to install the MySQL Cluster Docker image from https://hub.docker.com/r/mysql/mysql-cluster/ on each host

    2. REPO: Try to install the NDB Cluster software from the MySQL Repositories on each host

    3. BOTH: Try to install either the Docker image or the software from the repository on each host, giving preference to the repository

    4. NONE: Do not install the NDB Cluster software on the hosts; this is the default

  • Open FW Ports: Check this checkbox to have the installer attempt to open ports required by NDB Cluster processes on all hosts.

The next figure shows the Define Cluster page with settings for a small test cluster with all nodes running on localhost:

Figure 21.21 The NDB Cluster Auto-Installer Define Cluster screen, with settings for a test cluster

Content is described in the surrounding text.

After making the desired settings, you can save them to the configuration file and proceed to the Define Hosts screen by clicking the Save & Next button.

If you exit the installer without saving, no changes are made to the configuration file.

NDB Cluster Auto-Installer Define Hosts Screen

The Define Hosts screen, shown here, provides a means of viewing and specifying several key properties of each cluster host:

Figure 21.22 NDB Cluster Define Hosts screen, start

Content is described in the surrounding text.

Properties shown include the following:

  • Host: Name or IP address of this host

  • Res.info: Shows OK if the installer was able to retrieve requested resource information from this host

  • Platform: Operating system or platform

  • Memory (MB): Amount of RAM on this host

  • Cores: Number of CPU cores available on this host

  • MySQL Cluster install directory: Path to directory where the NDB Cluster software is installed on this host; defaults to /usr/local/bin

  • MySQL Cluster data directory: Path to directory used for data by NDB Cluster processes on this host; defaults to /var/lib/mysql-cluster.

  • DiskFree: Free disk space in bytes

    For hosts with multiple disks, only the space available on the disk used for the data directory is shown.

This screen also provides an extended view for each host that includes the following properties:

  • FQDN: This host's fully qualified domain name, used by the installer to connect with it, distribute configuration information to it, and start and stop cluster processes on it.

  • Internal IP: The IP address used for communication with cluster processes running on this host by processes running elsewhere.

  • OS Details: Detailed operating system name and version information.

  • Open FW: If this checkbox is enabled, the installer attempts to open ports in the host's firewall needed by cluster processes.

  • REPO URL: URL for MySQL NDB Cluster repository

  • DOCKER URL: URL for MySQL NDB Cluster Docker images; for NDB 8.0, this is mysql/mysql-cluster:8.0.

  • Install: If this checkbox is enabled, the Auto-Installer attempts to install the NDB Cluster software on this host

The extended view is shown here:

Figure 21.23 NDB Cluster Define Hosts screen, extended host info view

Content is described in the surrounding text.

All cells in the display are editable, with the exception of those in the Host, Res.info, and FQDN columns.

Be aware that it may take some time for information to be retrieved from remote hosts. Fields for which no value could be retrieved are indicated with an ellipsis (…). You can retry the fetching of resource information from one or more hosts by selecting the hosts in the list and then clicking the Refresh selected host(s) button.

Adding and Removing Hosts

You can add one or more hosts by clicking the Add Host button and entering the required properties where indicated in the Add new host dialog, shown here:

Figure 21.24 NDB Cluster Add Host dialog

Content is described in the surrounding text.

This dialog includes the following fields:

  • Host name: A comma-separated list of one or more host names, IP addresses, or both. These must be accessible from the host where the Auto-Installer is running.

  • Host internal IP (VPN): If you are setting up the cluster to run on a VPN or other internal network, enter the IP address or addresses used for contact by cluster nodes on other hosts.

  • Key-based auth: If checked, enables key-based authentication. You can enter any additional needed information in the User, Passphrase, and Key file fields.

  • Ordinary login: If accessing this host using a password-based login, enter the appropriate information in the User and Password fields.

  • Open FW ports: Selecting this checkbox allows the installer to try opening any ports needed by cluster processes in this host's firewall.

  • Configure installation: Checking this allows the Auto-Installer to attempt to set up the NDB Cluster software on this host.

To save the new host and its properties, click Add. If you wish to cancel without saving any changes, click Cancel instead.

Similarly, you can remove one or more hosts using the button labelled Remove selected host(s). When you remove a host, any process which was configured for that host is also removed.

Warning

Remove selected host(s) acts immediately. There is no confirmation dialog. If you remove a host in error, you must re-enter its name and properties manually using Add host.

If the SSH user credentials on the Define Cluster screen are changed, the Auto-Installer attempts to refresh the resource information from any hosts for which information is missing.

You can edit the host's platform name, hardware resource information, installation directory, and data directory by clicking the corresponding cell in the grid, or by selecting one or more hosts and clicking the button labelled Edit selected host(s). This causes a dialog box to appear, in which these fields can be edited, as shown here:

Figure 21.25 NDB Cluster Auto-Installer Edit Hosts dialog

Content is described in the surrounding text.

When more than one host is selected, any edited values are applied to all selected hosts.

Once you have entered all desired host information, you can use the Save & Next button to save the information to the cluster's configuration file and proceed to the Define Processes screen, where you can set up NDB Cluster processes on one or more hosts.

NDB Cluster Auto-Installer Define Processes Screen

The Define Processes screen, shown here, provides a way to assign NDB Cluster processes (nodes) to cluster hosts:

Figure 21.26 NDB Cluster Auto-Installer Define Processes dialog

Content is described in the surrounding text. The example process tree topology includes "Any host" and "localhost", as defined earlier. The localhost tree includes the following processes: Management mode 1, API node 1, API node 2, API node 3, SQL node 1, SQL node 2, Multi threaded data node 1, and Multi threaded data node 2. This panel also includes "Add process" and "Del[ete] process" buttons.

This screen contains a process tree showing cluster hosts and processes set up to run on each one, as well as a panel which displays information about the item currently selected in the tree.

When this screen is accessed for the first time for a given cluster, a default set of processes is defined for you, based on the number of hosts. If you later return to the Define Hosts screen, remove all hosts, and add new hosts, this also causes a new default set of processes to be defined.

NDB Cluster processes are of the types described in this list:

  • Management node.  Performs administrative tasks such as stopping individual data nodes, querying node and cluster status, and making backups. Executable: ndb_mgmd.

  • Single-threaded data node.  Stores data and executes queries. Executable: ndbd.

  • Multi threaded data node.  Stores data and executes queries with multiple worker threads executing in parallel. Executable: ndbmtd.

  • SQL node.  MySQL server for executing SQL queries against NDB. Executable: mysqld.

  • API node.  A client accessing data in NDB by means of the NDB API or other low-level client API, rather than by using SQL. See MySQL NDB Cluster API Developer Guide, for more information.

For more information about process (node) types, see Section 21.1.1, “NDB Cluster Core Concepts”.

Processes shown in the tree are numbered sequentially by type, for each host—for example, SQL node 1, SQL node 2, and so on—to simplify identification.

Each management node, data node, or SQL process must be assigned to a specific host, and is not allowed to run on any other host. An API node may be assigned to a single host, but this is not required. Instead, you can assign it to the special Any host entry, which the tree contains in addition to the other hosts, and which acts as a placeholder for processes that are allowed to run on any host. Only API processes may use the Any host entry.

Adding processes.  To add a new process to a given host, either right-click that host's entry in the tree, then select the Add process popup when it appears, or select a host in the process tree, and press the Add process button below the process tree. Performing either of these actions opens the add process dialog, as shown here:

Figure 21.27 NDB Cluster Auto-Installer Add Process Dialog

Most content is described in the surrounding text. Shows a window titled "Add new process" with two options: "Select process type:" that shows a select box with "API node" selected, and "Enter process name:" with "API node 4" entered as plain text. Action buttons include "Cancel" and "Add".

Here you can select from among the available process types described earlier this section; you can also enter an arbitrary process name to take the place of the suggested value, if desired.

Removing processes.  To delete a process, select that process in the tree and use the Del process button.

When you select a process in the process tree, information about that process is displayed in the information panel, where you can change the process name and possibly its type. You can change a multi-threaded data node (ndbmtd) to a single-threaded data node (ndbd), or the reverse, only; no other process type changes are allowed. If you want to make a change between any other process types, you must delete the original process first, then add a new process of the desired type.

NDB Cluster Auto-Installer Define Parameters Screen

Like the Define Processes screen, this screen includes a process tree; the Define Parameters process tree is organized by process or node type, in groups labelled Management Layer, Data Layer, SQL Layer, and API Layer. An information panel displays information regarding the item currently selected. The Define Parameters screen is shown here:

Figure 21.28 NDB Cluster Auto-Installer Define Parameters screen

Content is described in the surrounding text.

The checkbox labelled Show advanced configuration, when checked, makes advanced options for data node and SQL node processes visible in the information pane. These options are set and used whether or not they are visible. You can also enable this behavior globally by checking Show advanced configuration options under Settings (see NDB Cluster Installer Settings and Help Menus).

You can edit attributes for a single process by selecting that process from the tree, or for all processes of the same type in the cluster by selecting one of the Layer folders. A per-process value set for a given attribute overrides any per-group setting for that attribute that would otherwise apply to the process in question. An example of such an information panel (for an SQL process) is shown here:

Figure 21.29 Define Parameters—Process Attributes

Content is described in the surrounding text.

Attributes whose values can be overridden are shown in the information panel with a button bearing a plus sign. This + button activates an input widget for the attribute, enabling you to change its value. When the value has been overridden, this button changes into a button showing an X. The X button undoes any changes made to a given attribute, which immediately reverts to the predefined value.

All configuration attributes have predefined values calculated by the installer, based on such factors as host name, node ID, and node type. In most cases, these values may be left as they are. If you are not already familiar with a given attribute, it is highly recommended that you read the applicable documentation before changing its value. To make finding this information easier, each attribute name shown in the information panel is linked to its description in the online NDB Cluster documentation.

NDB Cluster Auto-Installer Deploy Configuration Screen

This screen allows you to perform the following tasks:

  • Review process startup commands and configuration files to be applied

  • Distribute configuration files by creating any necessary files and directories on all cluster hosts—that is, deploy the cluster as presently configured

  • Start and stop the cluster

The Deploy Configuration screen is shown here:

Figure 21.30 NDB Cluster Auto-Installer Deploy Configuration screen

Content is described in the surrounding text.

Like the Define Parameters screen, this screen features a process tree which is organized by process type. Next to each process in the tree is a status icon indicating the current status of the process: connected (CONNECTED), starting (STARTING), running (STARTED), stopping (STOPPING), or disconnected (NO_CONTACT). The icon shows green if the process is connected or running; yellow if it is starting or stopping; red if the process is stopped or cannot be contacted by the management server.

This screen also contains two information panels, one showing the startup command or commands needed to start the selected process. (For some processes, more than one command may be required—for example, if initialization is necessary.) The other panel shows the contents of the configuration file, if any, for the given process.

This screen also contains four buttons, whose labels and functions are described in the following list:

  • Install cluster: Nonfunctional in this release; implementation intended for a future release.

  • Deploy cluster: Verify that the configuration is valid. Create any directories required on the cluster hosts, and distribute the configuration files onto the hosts. A progress bar shows how far the deployment has proceeded, and a dialog is displayed when the deployment has completed, as shown here:

    Figure 21.31 Cluster Deployment Process

    Content is described in the surrounding text.

  • Start cluster: The cluster is deployed as with Deploy cluster, after which all cluster processes are started in the correct order.

    Starting these processes may take some time. If the estimated time to completion is too large, the installer provides an opportunity to cancel or to continue the startup procedure. A progress bar indicates the current status of the startup procedure, as shown here:

    Figure 21.32 Cluster Startup Process with Progress Bar

    Content is described in the surrounding text.

    The process status icons next to the items shown in the process tree also update with the status of each process.

    A confirmation dialog is shown when the startup process has completed, as shown here:

    Figure 21.33 Cluster Startup, Process Completed Dialog

    Content is described in the surrounding text.

  • Stop cluster: After the cluster has been started, you can stop it using this. As with starting the cluster, cluster shutdown is not instantaneous, and may require some time to complete. A progress bar, similar to that displayed during cluster startup, shows the approximate current status of the cluster shutdown procedure, as do the process status icons adjoining the process tree. The progress bar is shown here:

    Figure 21.34 Cluster Shutdown Process, with Progress Bar

    Content is described in the surrounding text.

    A confirmation dialog indicates when the shutdown process is complete:

    Figure 21.35 Cluster Shutdown, Process Completed Dialog

    Content is described in the surrounding text.

The Auto-Installer generates a config.ini file containing NDB node parameters for each management node, as well as a my.cnf file containing the appropriate options for each mysqld process in the cluster. No configuration files are created for data nodes or API nodes.
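
The generated files are ordinary NDB Cluster configuration files of the kinds described in Section 21.3.3, “NDB Cluster Configuration Files”. As a rough sketch of what such files typically contain (the host names below are placeholders, not values produced by any particular run of the installer):

```ini
# --- config.ini (read by ndb_mgmd): one [ndbd] section per data node,
#     one [mysqld] section per SQL node ---
[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=198.51.100.10
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=198.51.100.20
DataDir=/usr/local/mysql/data

[ndbd]
HostName=198.51.100.30
DataDir=/usr/local/mysql/data

[mysqld]
HostName=198.51.100.40

# --- my.cnf (read by each mysqld process): enable the NDBCLUSTER
#     storage engine and point the SQL node at the management server ---
[mysqld]
ndbcluster
ndb-connectstring=198.51.100.10
```

See Section 21.2.5, “Initial Configuration of NDB Cluster”, for worked examples of both files.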

21.2.3 Installation of NDB Cluster on Linux

This section covers installation methods for NDB Cluster on Linux and other Unix-like operating systems. While the next few sections refer to a Linux operating system, the instructions and procedures given there should be easily adaptable to other supported Unix-like platforms. For manual installation and setup instructions specific to Windows systems, see Section 21.2.4, “Installing NDB Cluster on Windows”.

Each NDB Cluster host computer must have the correct executable programs installed. A host running an SQL node must have installed on it a MySQL Server binary (mysqld). Management nodes require the management server daemon (ndb_mgmd); data nodes require the data node daemon (ndbd or ndbmtd). It is not necessary to install the MySQL Server binary on management node hosts and data node hosts. It is recommended that you also install the management client (ndb_mgm) on the management server host.

Installation of NDB Cluster on Linux can be done using precompiled binaries from Oracle (downloaded as a .tar.gz archive), with RPM packages (also available from Oracle), or from source code. All three of these installation methods are described in the sections that follow.

Regardless of the method used, after installing the NDB Cluster binaries you must still create configuration files for all cluster nodes before you can start the cluster. See Section 21.2.5, “Initial Configuration of NDB Cluster”.

21.2.3.1 Installing an NDB Cluster Binary Release on Linux

This section covers the steps necessary to install the correct executables for each type of Cluster node from precompiled binaries supplied by Oracle.

For setting up a cluster using precompiled binaries, the first step in the installation process for each cluster host is to download the binary archive from the NDB Cluster downloads page. (For the most recent 64-bit NDB 7.5 release, this is mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64.tar.gz.) We assume that you have placed this file in each machine's /var/tmp directory.

If you require a custom binary, see Section 2.9.5, “Installing MySQL Using a Development Source Tree”.

Note

After completing the installation, do not yet start any of the binaries. We show you how to do so following the configuration of the nodes (see Section 21.2.5, “Initial Configuration of NDB Cluster”).

SQL nodes.  On each of the machines designated to host SQL nodes, perform the following steps as the system root user:

  1. Check your /etc/passwd and /etc/group files (or use whatever tools are provided by your operating system for managing users and groups) to see whether there is already a mysql group and mysql user on the system. Some OS distributions create these as part of the operating system installation process. If they are not already present, create a new mysql user group, and then add a mysql user to this group:

    shell> groupadd mysql
    shell> useradd -g mysql -s /bin/false mysql
    

    The syntax for useradd and groupadd may differ slightly on different versions of Unix, or they may have different names such as adduser and addgroup.

  2. Change location to the directory containing the downloaded file, unpack the archive, and create a symbolic link named mysql to the mysql directory.

    Note

    The actual file and directory names vary according to the NDB Cluster version number.

    shell> cd /var/tmp
    shell> tar -C /usr/local -xzvf mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64.tar.gz
    shell> ln -s /usr/local/mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64 /usr/local/mysql
    
  3. Change location to the mysql directory and set up the system databases using mysqld --initialize as shown here:

    shell> cd mysql
    shell> mysqld --initialize
    

    This generates a random password for the MySQL root account. If you do not want the random password to be generated, you can substitute the --initialize-insecure option for --initialize. In either case, you should review Section 2.10.1, “Initializing the Data Directory”, for additional information before performing this step. See also Section 4.4.4, “mysql_secure_installation — Improve MySQL Installation Security”.

  4. Set the necessary permissions for the MySQL server and data directories:

    shell> chown -R root .
    shell> chown -R mysql data
    shell> chgrp -R mysql .
    
  5. Copy the MySQL startup script to the appropriate directory, make it executable, and set it to start when the operating system is booted up:

    shell> cp support-files/mysql.server /etc/rc.d/init.d/
    shell> chmod +x /etc/rc.d/init.d/mysql.server
    shell> chkconfig --add mysql.server
    

    (The startup scripts directory may vary depending on your operating system and version—for example, in some Linux distributions, it is /etc/init.d.)

    Here we use Red Hat's chkconfig for creating links to the startup scripts; use whatever means is appropriate for this purpose on your platform, such as update-rc.d on Debian.

Remember that the preceding steps must be repeated on each machine where an SQL node is to reside.
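
The unpack-and-symlink pattern used in step 2 can be tried without root privileges in a scratch directory; the following sketch builds a stand-in archive (the file tree here is fabricated for illustration, not a real MySQL release) and then mirrors the tar -C and ln -s commands shown above:

```shell
# Build a tiny stand-in archive with the same top-level directory name
tmp=$(mktemp -d)
mkdir -p "$tmp/src/mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64/bin"
touch "$tmp/src/mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64/bin/mysqld"
tar -C "$tmp/src" -czf "$tmp/release.tar.gz" \
    mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64

# Unpack under a scratch "/usr/local" and create the mysql symbolic
# link, mirroring steps shown earlier
mkdir -p "$tmp/usr/local"
tar -C "$tmp/usr/local" -xzf "$tmp/release.tar.gz"
ln -s "$tmp/usr/local/mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64" \
      "$tmp/usr/local/mysql"

ls "$tmp/usr/local/mysql/bin"    # → mysqld
```

Because the symlink carries the version-independent name mysql, a later upgrade only requires unpacking the new release and repointing the link.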

Data nodes.  Installation of the data nodes does not require the mysqld binary. Only the NDB Cluster data node executable ndbd (single-threaded) or ndbmtd (multithreaded) is required. These binaries can also be found in the .tar.gz archive. Again, we assume that you have placed this archive in /var/tmp.

As system root (that is, after using sudo, su root, or your system's equivalent for temporarily assuming the system administrator account's privileges), perform the following steps to install the data node binaries on the data node hosts:

  1. Change location to the /var/tmp directory, and extract the ndbd and ndbmtd binaries from the archive into a suitable directory such as /usr/local/bin:

    shell> cd /var/tmp
    shell> tar -zxvf mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64.tar.gz
    shell> cd mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64
    shell> cp bin/ndbd /usr/local/bin/ndbd
    shell> cp bin/ndbmtd /usr/local/bin/ndbmtd
    

    (You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndbd and ndbmtd have been copied to the executables directory.)

  2. Change location to the directory into which you copied the files, and then make both of them executable:

    shell> cd /usr/local/bin
    shell> chmod +x ndb*
    

The preceding steps should be repeated on each data node host.

Although only one of the data node executables is required to run an NDB Cluster data node, we have shown you how to install both ndbd and ndbmtd in the preceding instructions. We recommend that you do this when installing or upgrading NDB Cluster, even if you plan to use only one of them, since this will save time and trouble in the event that you later decide to change from one to the other.

Note

The data directory on each machine hosting a data node is /usr/local/mysql/data. This piece of information is essential when configuring the management node. (See Section 21.2.5, “Initial Configuration of NDB Cluster”.)

Management nodes.  Installation of the management node does not require the mysqld binary. Only the NDB Cluster management server (ndb_mgmd) is required; you most likely want to install the management client (ndb_mgm) as well. Both of these binaries can also be found in the .tar.gz archive. Again, we assume that you have placed this archive in /var/tmp.

As system root, perform the following steps to install ndb_mgmd and ndb_mgm on the management node host:

  1. Change location to the /var/tmp directory, and extract the ndb_mgm and ndb_mgmd from the archive into a suitable directory such as /usr/local/bin:

    shell> cd /var/tmp
    shell> tar -zxvf mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64.tar.gz
    shell> cd mysql-cluster-gpl-7.5.16-linux-glibc2.12-x86_64
    shell> cp bin/ndb_mgm* /usr/local/bin
    

    (You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndb_mgm and ndb_mgmd have been copied to the executables directory.)

  2. Change location to the directory into which you copied the files, and then make both of them executable:

    shell> cd /usr/local/bin
    shell> chmod +x ndb_mgm*
    

In Section 21.2.5, “Initial Configuration of NDB Cluster”, we create configuration files for all of the nodes in our example NDB Cluster.

21.2.3.2 Installing NDB Cluster from RPM

This section covers the steps necessary to install the correct executables for each type of NDB Cluster node using RPM packages supplied by Oracle beginning with NDB 7.5.4. For information about RPMs for previous versions of NDB Cluster, see Installation using old-style RPMs (NDB 7.5.3 and earlier).

As an alternative to the method described in this section, Oracle provides MySQL Repositories for NDB Cluster 7.5.6 and later that are compatible with many common Linux distributions. Two repositories, listed here, are available for RPM-based distributions:

  • For distributions using yum or dnf, you can use the MySQL Yum Repository for NDB Cluster. See Installing MySQL NDB Cluster Using the Yum Repository, for instructions and additional information.

  • For SLES, you can use the MySQL SLES Repository for NDB Cluster. See Installing MySQL NDB Cluster Using the SLES Repository, for instructions and additional information.

RPMs are available for both 32-bit and 64-bit Linux platforms. The filenames for these RPMs use the following pattern:

mysql-cluster-license-component-ver-rev.distro.arch.rpm

    license:= {commercial | community}

    component:= {management-server | data-node | server | client | other—see text}

    ver:= major.minor.release

    rev:= major[.minor]

    distro:= {el6 | el7 | sles12}

    arch:= {i686 | x86_64}

An example of a filename following this pattern is mysql-cluster-community-data-node-7.5.16-1.el7.x86_64.rpm.

license indicates whether the RPM is part of a Commercial or Community release of NDB Cluster. In the remainder of this section, we assume for the examples that you are installing a Community release.

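As an informal check of this naming scheme, the fields can be pulled apart with standard shell parameter expansion; the filename used here is only an illustration:

```shell
# Split an RPM filename (illustrative) into license, component, ver, rev, distro, arch.
rpm=mysql-cluster-community-data-node-7.5.16-1.el7.x86_64.rpm
base=${rpm%.rpm}                         # drop the .rpm suffix
arch=${base##*.};   base=${base%.*}      # x86_64
distro=${base##*.}; base=${base%.*}      # el7
rev=${base##*-};    base=${base%-*}      # 1
ver=${base##*-};    base=${base%-*}      # 7.5.16
rest=${base#mysql-cluster-}              # community-data-node
license=${rest%%-*}                      # community
component=${rest#*-}                     # data-node
echo "$license $component $ver $rev $distro $arch"
# prints: community data-node 7.5.16 1 el7 x86_64
```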
Possible values for component, with descriptions, can be found in the following table:

Table 21.5 Components of the NDB Cluster RPM distribution

Component Description
auto-installer NDB Cluster Auto Installer program; see Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”, for usage
client MySQL and NDB client programs; includes mysql client, ndb_mgm client, and other client tools
common Character set and error message information needed by the MySQL server
data-node ndbd and ndbmtd data node binaries
devel Headers and library files needed for MySQL client development
embedded Embedded MySQL server
embedded-compat Backwards-compatible embedded MySQL server
embedded-devel Header and library files for developing applications for embedded MySQL
java JAR files needed for support of ClusterJ applications
libs MySQL client libraries
libs-compat Backwards-compatible MySQL client libraries
management-server The NDB Cluster management server (ndb_mgmd)
memcached Files needed to support ndbmemcache
minimal-debuginfo Debug information for package server-minimal; useful when developing applications that use this package or when debugging this package
ndbclient NDB client library for running NDB API and MGM API applications (libndbclient)
ndbclient-devel Header and other files needed for developing NDB API and MGM API applications
nodejs Files needed to set up Node.JS support for NDB Cluster
server The MySQL server (mysqld) with NDB storage engine support included, and associated MySQL server programs
server-minimal Minimal installation of the MySQL server for NDB and related tools
test mysqltest, other MySQL test programs, and support files


A single bundle (.tar file) of all NDB Cluster RPMs for a given platform and architecture is also available. The name of this file follows the pattern shown here:

mysql-cluster-license-ver-rev.distro.arch.rpm-bundle.tar

You can extract the individual RPM files from this file using tar or your preferred tool for extracting archives.

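For example, tar -xf extracts the bundle's contents into a directory of your choice. The sketch below first creates a placeholder bundle so that it can be tried without downloading anything; all file names in it are stand-ins:

```shell
# Self-contained demo: placeholder .rpm files stand in for the real packages.
demo=$(mktemp -d) && cd "$demo"
touch mysql-cluster-community-server-7.5.16-1.el7.x86_64.rpm \
      mysql-cluster-community-data-node-7.5.16-1.el7.x86_64.rpm
tar -cf rpm-bundle.tar *.rpm                   # build the stand-in bundle
mkdir rpms && tar -xf rpm-bundle.tar -C rpms   # extract the individual RPM files
ls rpms
```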
The components required to install the three major types of NDB Cluster nodes are given in the following list:

  • Management node: management-server

  • Data node: data-node

  • SQL node: server and common

In addition, the client RPM should be installed to provide the ndb_mgm management client on at least one management node. You may also wish to install it on SQL nodes, to have mysql and other MySQL client programs available on these. We discuss installation of nodes by type later in this section.

ver represents the three-part NDB storage engine version number in 7.5.x format, shown as 7.5.16 in the examples. rev provides the RPM revision number in major[.minor] format. In the examples shown in this section, we use 1 for this value.

The distro (Linux distribution) is one of el6 (Oracle Linux 6, Red Hat Enterprise Linux 6), el7 (Oracle Linux 7, Red Hat Enterprise Linux 7), or sles12 (SUSE Enterprise Linux 12). For the examples in this section, we assume that the host runs Oracle Linux 7, Red Hat Enterprise Linux 7, or the equivalent (el7).

arch is i686 for 32-bit RPMs and x86_64 for 64-bit versions. In the examples shown here, we assume a 64-bit platform.

The NDB Cluster version number in the RPM file names (shown here as 7.5.16) can vary according to the version which you are actually using. It is very important that all of the Cluster RPMs to be installed have the same version number. The architecture should also be appropriate to the machine on which the RPM is to be installed; in particular, you should keep in mind that 64-bit RPMs (x86_64) cannot be used with 32-bit operating systems (use i686 for the latter).

Data nodes.  On a computer that is to host an NDB Cluster data node it is necessary to install only the data-node RPM. To do so, copy this RPM to the data node host, and run the following command as the system root user, replacing the name shown for the RPM as necessary to match that of the RPM downloaded from the MySQL website:

shell> rpm -Uhv mysql-cluster-community-data-node-7.5.16-1.el7.x86_64.rpm

This installs the ndbd and ndbmtd data node binaries in /usr/sbin. Either of these can be used to run a data node process on this host.

SQL nodes.  Copy the server and common RPMs to each machine to be used for hosting an NDB Cluster SQL node (server requires common). Install the server RPM by executing the following command as the system root user, replacing the name shown for the RPM as necessary to match the name of the RPM downloaded from the MySQL website:

shell> rpm -Uhv mysql-cluster-community-server-7.5.16-1.el7.x86_64.rpm

This installs the MySQL server binary (mysqld), with NDB storage engine support, in the /usr/sbin directory. It also installs all needed MySQL Server support files and useful MySQL server programs, including the mysql.server and mysqld_safe startup scripts (in /usr/share/mysql and /usr/bin, respectively). The RPM installer should take care of general configuration issues (such as creating the mysql user and group, if needed) automatically.

Important

You must use the versions of these RPMs released for NDB Cluster; those released for the standard MySQL server do not provide support for the NDB storage engine.

To administer the SQL node (MySQL server), you should also install the client RPM, as shown here:

shell> rpm -Uhv mysql-cluster-community-client-7.5.16-1.el7.x86_64.rpm

This installs the mysql client and other MySQL client programs, such as mysqladmin and mysqldump, to /usr/bin.

Management nodes.  To install the NDB Cluster management server, it is necessary only to use the management-server RPM. Copy this RPM to the computer intended to host the management node, and then install it by running the following command as the system root user (replace the name shown for the RPM as necessary to match that of the management-server RPM downloaded from the MySQL website):

shell> rpm -Uhv mysql-cluster-community-management-server-7.5.16-1.el7.x86_64.rpm

This RPM installs the management server binary ndb_mgmd in the /usr/sbin directory. While this is the only program actually required for running a management node, it is also a good idea to have the ndb_mgm NDB Cluster management client available as well. You can obtain this program, as well as other NDB client programs such as ndb_desc and ndb_config, by installing the client RPM as described previously.

Note

Previously, ndb_mgm was installed by the same RPM used to install the management server. In NDB 7.5.4 and later, all NDB client programs are obtained from the same client RPM that installs mysql and other MySQL clients.

See Section 2.5.5, “Installing MySQL on Linux Using RPM Packages from Oracle”, for general information about installing MySQL using RPMs supplied by Oracle.

After installing from RPM, you still need to configure the cluster; see Section 21.2.5, “Initial Configuration of NDB Cluster”, for the relevant information.

Installation using old-style RPMs (NDB 7.5.3 and earlier).  The information in the remainder of this section applies only to NDB 7.5.3 and earlier, and provides the steps necessary to install the correct executables for each type of NDB Cluster node using old-style RPM packages as supplied by Oracle prior to NDB 7.5.4. The filenames for these old-style RPMs use the following pattern:

MySQL-Cluster-component-producttype-ndbversion-revision.distribution.architecture.rpm

component:= {server | client [| other]}

producttype:= {gpl | advanced}

ndbversion:= major.minor.release

distribution:= {sles11 | rhel5 | el6}

architecture:= {i386 | x86_64}

The component can be server or client. (Other values are possible, but since only the server and client components are required for a working NDB Cluster installation, we do not discuss them here.) The producttype for Community RPMs downloaded from https://dev.mysql.com/downloads/cluster/ is always gpl; advanced is used to indicate commercial releases. ndbversion represents the three-part NDB storage engine version number in 7.5.x format; we use 7.5.3 throughout the rest of this section. The RPM revision is shown as 1 in the examples following. The distribution can be one of sles11 (SUSE Enterprise Linux 11), rhel5 (Oracle Linux 5, Red Hat Enterprise Linux 4 and 5), or el6 (Oracle Linux 6, Red Hat Enterprise Linux 6). The architecture is i386 for 32-bit RPMs and x86_64 for 64-bit versions.

For an NDB Cluster, one and possibly two RPMs are required:

  • The server RPM (for example, MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm), which supplies the core files needed to run a MySQL Server with NDBCLUSTER storage engine support (that is, as an NDB Cluster SQL node) as well as all NDB Cluster executables, including the management node, data node, and ndb_mgm client binaries. This RPM is always required for installing NDB Cluster.

  • If you do not have your own client application capable of administering a MySQL server, you should also obtain and install the client RPM (for example, MySQL-Cluster-client-gpl-7.5.3-1.sles11.i386.rpm), which supplies the mysql client.

It is very important that all of the Cluster RPMs to be installed have the same version number. The architecture designation should also be appropriate to the machine on which the RPM is to be installed; in particular, you should keep in mind that 64-bit RPMs cannot be used with 32-bit operating systems.

Data nodes.  On a computer that is to host a cluster data node it is necessary to install only the server RPM. To do so, copy this RPM to the data node host, and run the following command as the system root user, replacing the name shown for the RPM as necessary to match that of the RPM downloaded from the MySQL website:

shell> rpm -Uhv MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm

Although this installs all NDB Cluster binaries, only the program ndbd or ndbmtd (both in /usr/sbin) is actually needed to run an NDB Cluster data node.

SQL nodes.  On each machine to be used for hosting a cluster SQL node, install the server RPM by executing the following command as the system root user, replacing the name shown for the RPM as necessary to match the name of the RPM downloaded from the MySQL website:

shell> rpm -Uhv MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm

This installs the MySQL server binary (mysqld) with NDB storage engine support in the /usr/sbin directory, as well as all needed MySQL Server support files. It also installs the mysql.server and mysqld_safe startup scripts (in /usr/share/mysql and /usr/bin, respectively). The RPM installer should take care of general configuration issues (such as creating the mysql user and group, if needed) automatically.

To administer the SQL node (MySQL server), you should also install the client RPM, as shown here:

shell> rpm -Uhv MySQL-Cluster-client-gpl-7.5.3-1.sles11.i386.rpm

This installs the mysql client program.

Management nodes.  To install the NDB Cluster management server, it is necessary only to use the server RPM. Copy this RPM to the computer intended to host the management node, and then install it by running the following command as the system root user (replace the name shown for the RPM as necessary to match that of the server RPM downloaded from the MySQL website):

shell> rpm -Uhv MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm

Although this RPM installs many other files, only the management server binary ndb_mgmd (in the /usr/sbin directory) is actually required for running a management node. The server RPM also installs ndb_mgm, the NDB management client.

See Section 2.5.5, “Installing MySQL on Linux Using RPM Packages from Oracle”, for general information about installing MySQL using RPMs supplied by Oracle. See Section 21.2.5, “Initial Configuration of NDB Cluster”, for information about required post-installation configuration.

21.2.3.3 Installing NDB Cluster Using .deb Files

This section provides information about installing NDB Cluster on Debian and related Linux distributions such as Ubuntu using the .deb files supplied by Oracle for this purpose.

For NDB Cluster 7.5.6 and later, Oracle also provides an APT repository for Debian and other distributions. See Installing MySQL NDB Cluster Using the APT Repository, for instructions and additional information.

Oracle provides .deb installer files for NDB Cluster 7.5 for 32-bit and 64-bit platforms. For a Debian-based system, only a single installer file is necessary. This file is named using the pattern shown here, according to the applicable NDB Cluster version, Debian version, and architecture:

mysql-cluster-gpl-ndbver-debiandebianver-arch.deb

Here, ndbver is the 3-part NDB engine version number, debianver is the major version of Debian (8 or 9), and arch is one of i686 or x86_64. In the examples that follow, we assume you wish to install NDB 7.5.16 on a 64-bit Debian 9 system; in this case, the installer file is named mysql-cluster-gpl-7.5.16-debian9-x86_64.deb.

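The naming pattern can be made concrete by assembling the file name from its parts; the values used here match this example:

```shell
# Assemble the .deb installer name from its components (values are illustrative).
ndbver=7.5.16
debianver=9
arch=x86_64
deb="mysql-cluster-gpl-${ndbver}-debian${debianver}-${arch}.deb"
echo "$deb"    # prints: mysql-cluster-gpl-7.5.16-debian9-x86_64.deb
```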
Once you have downloaded the appropriate .deb file, you can install it from the command line using dpkg, like this:

shell> dpkg -i mysql-cluster-gpl-7.5.16-debian9-x86_64.deb

You can also remove it using dpkg as shown here:

shell> dpkg -r mysql

The installer file should also be compatible with most graphical package managers that work with .deb files, such as GDebi for the Gnome desktop.

The .deb file installs NDB Cluster under /opt/mysql/server-version/, where version is the 2-part release series version for the included MySQL server. For NDB 7.5, this is always 5.7. The directory layout is the same as that for the generic Linux binary distribution (see Table 2.3, “MySQL Installation Layout for Generic Unix/Linux Binary Package”), with the exception that startup scripts and configuration files are found in support-files instead of share. All NDB Cluster executables, such as ndb_mgm, ndbd, and ndb_mgmd, are placed in the bin directory.

21.2.3.4 Building NDB Cluster from Source on Linux

This section provides information about compiling NDB Cluster on Linux and other Unix-like platforms. Building NDB Cluster from source is similar to building the standard MySQL Server, although it differs in a few key respects discussed here. For general information about building MySQL from source, see Section 2.9, “Installing MySQL from Source”. For information about compiling NDB Cluster on Windows platforms, see Section 21.2.4.2, “Compiling and Installing NDB Cluster from Source on Windows”.

Building NDB Cluster requires using the NDB Cluster sources. These are available from the NDB Cluster downloads page at https://dev.mysql.com/downloads/cluster/. The archived source file should have a name similar to mysql-cluster-gpl-7.5.16.tar.gz. You can also obtain NDB Cluster sources from GitHub at https://github.com/mysql/mysql-server/tree/cluster-7.5 (NDB 7.5) and https://github.com/mysql/mysql-server/tree/cluster-7.6 (NDB 7.6). Building NDB Cluster 7.5 or 7.6 from standard MySQL Server 5.7 sources is not supported.

The WITH_NDBCLUSTER_STORAGE_ENGINE option for CMake causes the binaries for the management nodes, data nodes, and other NDB Cluster programs to be built; it also causes mysqld to be compiled with NDB storage engine support. This option (or its alias WITH_NDBCLUSTER) is required when building NDB Cluster.

Important

The WITH_NDB_JAVA option is enabled by default. This means that, by default, if CMake cannot find the location of Java on your system, the configuration process fails; if you do not wish to enable Java and ClusterJ support, you must indicate this explicitly by configuring the build using -DWITH_NDB_JAVA=OFF. Use WITH_CLASSPATH to provide the Java classpath if needed.

For more information about CMake options specific to building NDB Cluster, see Options for Compiling NDB Cluster.

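Putting these options together, a source build might be configured and run as shown below. This is a sketch only: the archive name, the use of a separate build directory, and the choice of -DWITH_NDB_JAVA=OFF (appropriate for hosts without Java) are assumptions to adapt to your system.

```shell
# Illustrative build sequence; adjust the archive name and options as needed.
tar -zxvf mysql-cluster-gpl-7.5.16.tar.gz
cd mysql-cluster-gpl-7.5.16
mkdir build && cd build
cmake .. -DWITH_NDBCLUSTER_STORAGE_ENGINE=ON -DWITH_NDB_JAVA=OFF
make && make install        # run make install as root for the default prefix
```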
After you have run make && make install (or your system's equivalent), the result is similar to what is obtained by unpacking a precompiled binary to the same location.

Management nodes.  When building from source and running the default make install, the management server and management client binaries (ndb_mgmd and ndb_mgm) can be found in /usr/local/mysql/bin. Only ndb_mgmd is required to be present on a management node host; however, it is also a good idea to have ndb_mgm present on the same host machine. Neither of these executables requires a specific location on the host machine's file system.

Data nodes.  The only executable required on a data node host is the data node binary ndbd or ndbmtd. (mysqld, for example, does not have to be present on the host machine.) By default, when building from source, this file is placed in the directory /usr/local/mysql/bin. For installing on multiple data node hosts, only ndbd or ndbmtd need be copied to the other host machine or machines. (This assumes that all data node hosts use the same architecture and operating system; otherwise you may need to compile separately for each different platform.) The data node binary need not be in any particular location on the host's file system, as long as the location is known.

When compiling NDB Cluster from source, no special options are required for building multithreaded data node binaries. Configuring the build with NDB storage engine support causes ndbmtd to be built automatically; make install places the ndbmtd binary in the installation bin directory along with mysqld, ndbd, and ndb_mgm.

SQL nodes.  If you compile MySQL with clustering support, and perform the default installation (using make install as the system root user), mysqld is placed in /usr/local/mysql/bin. Follow the steps given in Section 2.9, “Installing MySQL from Source” to make mysqld ready for use. If you want to run multiple SQL nodes, you can use a copy of the same mysqld executable and its associated support files on several machines. The easiest way to do this is to copy the entire /usr/local/mysql directory and all directories and files contained within it to the other SQL node host or hosts, then repeat the steps from Section 2.9, “Installing MySQL from Source” on each machine. If you configure the build with a nondefault PREFIX option, you must adjust the directory accordingly.

In Section 21.2.5, “Initial Configuration of NDB Cluster”, we create configuration files for all of the nodes in our example NDB Cluster.

21.2.4 Installing NDB Cluster on Windows

This section describes installation procedures for NDB Cluster on Windows hosts. NDB Cluster 7.5 binaries for Windows can be obtained from https://dev.mysql.com/downloads/cluster/. For information about installing NDB Cluster on Windows from a binary release provided by Oracle, see Section 21.2.4.1, “Installing NDB Cluster on Windows from a Binary Release”.

It is also possible to compile and install NDB Cluster from source on Windows using Microsoft Visual Studio. For more information, see Section 21.2.4.2, “Compiling and Installing NDB Cluster from Source on Windows”.

21.2.4.1 Installing NDB Cluster on Windows from a Binary Release

This section describes a basic installation of NDB Cluster on Windows using a binary no-install NDB Cluster release provided by Oracle, using the same 4-node setup outlined in the beginning of this section (see Section 21.2, “NDB Cluster Installation”), as shown in the following table:

Table 21.6 Network addresses of nodes in example cluster

Node IP Address
Management node (mgmd) 198.51.100.10
SQL node (mysqld) 198.51.100.20
Data node "A" (ndbd) 198.51.100.30
Data node "B" (ndbd) 198.51.100.40

As on other platforms, the NDB Cluster host computer running an SQL node must have installed on it a MySQL Server binary (mysqld.exe). You should also have the MySQL client (mysql.exe) on this host. For management nodes and data nodes, it is not necessary to install the MySQL Server binary; however, each management node requires the management server daemon (ndb_mgmd.exe); each data node requires the data node daemon (ndbd.exe or ndbmtd.exe). For this example, we refer to ndbd.exe as the data node executable, but you can install ndbmtd.exe, the multithreaded version of this program, instead, in exactly the same way. You should also install the management client (ndb_mgm.exe) on the management server host. This section covers the steps necessary to install the correct Windows binaries for each type of NDB Cluster node.

Note

As with other Windows programs, NDB Cluster executables are named with the .exe file extension. However, it is not necessary to include the .exe extension when invoking these programs from the command line. Therefore, we often simply refer to these programs in this documentation as mysqld, mysql, ndb_mgmd, and so on. You should understand that, whether we refer (for example) to mysqld or mysqld.exe, either name means the same thing (the MySQL Server program).

For setting up an NDB Cluster using Oracle's no-install binaries, the first step in the installation process is to download the latest NDB Cluster Windows ZIP binary archive from https://dev.mysql.com/downloads/cluster/. This archive has a filename of the form mysql-cluster-gpl-ver-winarch.zip, where ver is the NDB storage engine version (such as 7.5.16), and arch is the architecture (32 for 32-bit binaries, and 64 for 64-bit binaries). For example, the NDB Cluster 7.5.16 archive for 64-bit Windows systems is named mysql-cluster-gpl-7.5.16-win64.zip.

You can run 32-bit NDB Cluster binaries on both 32-bit and 64-bit versions of Windows; however, 64-bit NDB Cluster binaries can be used only on 64-bit versions of Windows. If you are using a 32-bit version of Windows on a computer that has a 64-bit CPU, then you must use the 32-bit NDB Cluster binaries.

To minimize the number of files that need to be downloaded from the Internet or copied between machines, we start with the computer where you intend to run the SQL node.

SQL node.  We assume that you have placed a copy of the archive in the directory C:\Documents and Settings\username\My Documents\Downloads on the computer having the IP address 198.51.100.20, where username is the name of the current user. (You can obtain this name using ECHO %USERNAME% on the command line.) To install and run NDB Cluster executables as Windows services, this user should be a member of the Administrators group.

Extract all the files from the archive. The Extraction Wizard integrated with Windows Explorer is adequate for this task. (If you use a different archive program, be sure that it extracts all files and directories from the archive, and that it preserves the archive's directory structure.) When you are asked for a destination directory, enter C:\, which causes the Extraction Wizard to extract the archive to the directory C:\mysql-cluster-gpl-ver-winarch. Rename this directory to C:\mysql.

It is possible to install the NDB Cluster binaries to directories other than C:\mysql\bin; however, if you do so, you must modify the paths shown in this procedure accordingly. In particular, if the MySQL Server (SQL node) binary is installed to a location other than C:\mysql or C:\Program Files\MySQL\MySQL Server 5.7, or if the SQL node's data directory is in a location other than C:\mysql\data or C:\Program Files\MySQL\MySQL Server 5.7\data, extra configuration options must be used on the command line or added to the my.ini or my.cnf file when starting the SQL node. For more information about configuring a MySQL Server to run in a nonstandard location, see Section 2.3.4, “Installing MySQL on Microsoft Windows Using a noinstall ZIP Archive”.

For a MySQL Server with NDB Cluster support to run as part of an NDB Cluster, it must be started with the options --ndbcluster and --ndb-connectstring. While you can specify these options on the command line, it is usually more convenient to place them in an option file. To do this, create a new text file in Notepad or another text editor. Enter the following configuration information into this file:

[mysqld]
# Options for mysqld process:
ndbcluster                       # run NDB storage engine
ndb-connectstring=198.51.100.10  # location of management server

You can add other options used by this MySQL Server if desired (see Section 2.3.4.2, “Creating an Option File”), but the file must contain the options shown, at a minimum. Save this file as C:\mysql\my.ini. This completes the installation and setup for the SQL node.
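
As a quick sanity check, the my.ini contents shown above can be parsed with Python's standard configparser module. This sketch is only an illustration (mysqld itself is what actually reads the file); note that the bare ndbcluster option requires allow_no_value, and the trailing comments require inline_comment_prefixes.

```python
import configparser

# Parse the SQL node's my.ini as shown in the text. ndbcluster is a
# valueless option, so allow_no_value=True is required; the trailing
# "# ..." comments need inline_comment_prefixes to be stripped.
MY_INI = """\
[mysqld]
# Options for mysqld process:
ndbcluster                       # run NDB storage engine
ndb-connectstring=198.51.100.10  # location of management server
"""

cfg = configparser.ConfigParser(allow_no_value=True,
                                inline_comment_prefixes=('#',))
cfg.read_string(MY_INI)
print('ndbcluster' in cfg['mysqld'])       # True
print(cfg['mysqld']['ndb-connectstring'])  # 198.51.100.10
```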

Data nodes.  An NDB Cluster data node on a Windows host requires only a single executable, one of either ndbd.exe or ndbmtd.exe. For this example, we assume that you are using ndbd.exe, but the same instructions apply when using ndbmtd.exe. On each computer where you wish to run a data node (the computers having the IP addresses 198.51.100.30 and 198.51.100.40), create the directories C:\mysql, C:\mysql\bin, and C:\mysql\cluster-data; then, on the computer where you downloaded and extracted the no-install archive, locate ndbd.exe in the C:\mysql\bin directory. Copy this file to the C:\mysql\bin directory on each of the two data node hosts.

To function as part of an NDB Cluster, each data node must be given the address or hostname of the management server. You can supply this information on the command line using the --ndb-connectstring or -c option when starting each data node process. However, it is usually preferable to put this information in an option file. To do this, create a new text file in Notepad or another text editor and enter the following text:

[mysql_cluster]
# Options for data node process:
ndb-connectstring=198.51.100.10  # location of management server

Save this file as C:\mysql\my.ini on the data node host. Create another text file containing the same information and save it as C:\mysql\my.ini on the other data node host, or copy the my.ini file from the first data node host to the second one, making sure to place the copy in the second data node's C:\mysql directory. Both data node hosts are now ready for use in the NDB Cluster, which leaves only the management node to be installed and configured.

Management node.  The only executable program required on a computer used for hosting an NDB Cluster management node is the management server program ndb_mgmd.exe. However, in order to administer the NDB Cluster once it has been started, you should also install the NDB Cluster management client program ndb_mgm.exe on the same machine as the management server. Locate these two programs on the machine where you downloaded and extracted the no-install archive; this should be the directory C:\mysql\bin on the SQL node host. Create the directory C:\mysql\bin on the computer having the IP address 198.51.100.10, then copy both programs to this directory.

You should now create two configuration files for use by ndb_mgmd.exe:

  1. A local configuration file to supply configuration data specific to the management node itself. Typically, this file needs only to supply the location of the NDB Cluster global configuration file (see item 2).

    To create this file, start a new text file in Notepad or another text editor, and enter the following information:

    [mysql_cluster]
    # Options for management node process
    config-file=C:/mysql/bin/config.ini
    

    Save this file as the text file C:\mysql\bin\my.ini.

  2. A global configuration file from which the management node can obtain configuration information governing the NDB Cluster as a whole. At a minimum, this file must contain a section for each node in the NDB Cluster, and the IP addresses or hostnames for the management node and all data nodes (HostName configuration parameter). It is also advisable to include the following additional information:

    • The IP address or hostname of any SQL nodes

    • The data memory and index memory allocated to each data node (DataMemory and IndexMemory configuration parameters)

    • The number of replicas, using the NoOfReplicas configuration parameter (see Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”)

    • The directory where each data node stores its data and log files, and the directory where the management node keeps its log files (in both cases, the DataDir configuration parameter)

    Create a new text file using a text editor such as Notepad, and input the following information:

    [ndbd default]
    # Options affecting ndbd processes on all data nodes:
    NoOfReplicas=2                      # Number of replicas
    DataDir=C:/mysql/cluster-data       # Directory for each data node's data files
                                        # Forward slashes used in directory path,
                                        # rather than backslashes. This is correct;
                                        # see Important note in text
    DataMemory=80M    # Memory allocated to data storage
    IndexMemory=18M   # Memory allocated to index storage
                      # For DataMemory and IndexMemory, we have used the
                      # default values. Since the "world" database takes up
                      # only about 500KB, this should be more than enough for
                      # this example Cluster setup.
    
    [ndb_mgmd]
    # Management process options:
    HostName=198.51.100.10              # Hostname or IP address of management node
    DataDir=C:/mysql/bin/cluster-logs   # Directory for management node log files
    
    [ndbd]
    # Options for data node "A":
                                    # (one [ndbd] section per data node)
    HostName=198.51.100.30          # Hostname or IP address
    
    [ndbd]
    # Options for data node "B":
    HostName=198.51.100.40          # Hostname or IP address
    
    [mysqld]
    # SQL node options:
    HostName=198.51.100.20          # Hostname or IP address
    

    Save this file as the text file C:\mysql\bin\config.ini.

Important

A single backslash character (\) cannot be used when specifying directory paths in program options or configuration files used by NDB Cluster on Windows. Instead, you must either escape each backslash character with a second backslash (\\), or replace the backslash with a forward slash character (/). For example, the following line from the [ndb_mgmd] section of an NDB Cluster config.ini file does not work:

DataDir=C:\mysql\bin\cluster-logs

Instead, you may use either of the following:

DataDir=C:\\mysql\\bin\\cluster-logs  # Escaped backslashes
DataDir=C:/mysql/bin/cluster-logs     # Forward slashes

For reasons of brevity and legibility, we recommend that you use forward slashes in directory paths used in NDB Cluster program options and configuration files on Windows.
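
Because config.ini repeats the [ndbd] section once per data node, generic INI parsers that require unique section names cannot represent it directly. The following sketch (illustrative only; ndb_mgmd is the real consumer of this file) walks a condensed version of the example file shown above, keeping repeated sections in order and using the forward-slash paths recommended here.

```python
# Minimal illustrative parser for the example config.ini shown above.
# NDB's config.ini allows the same section name ([ndbd]) to appear once
# per data node, which Python's configparser does not model directly,
# so we walk the lines ourselves. Sketch only, not part of NDB Cluster.
EXAMPLE = """\
[ndbd default]
NoOfReplicas=2                     # Number of replicas
DataDir=C:/mysql/cluster-data      # Forward slashes, per the note above

[ndb_mgmd]
HostName=198.51.100.10
DataDir=C:/mysql/bin/cluster-logs

[ndbd]
HostName=198.51.100.30

[ndbd]
HostName=198.51.100.40

[mysqld]
HostName=198.51.100.20
"""

def parse_sections(text):
    sections = []          # list of (name, {key: value}) in file order
    current = None
    for raw in text.splitlines():
        line = raw.split('#', 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        if line.startswith('[') and line.endswith(']'):
            current = (line[1:-1], {})
            sections.append(current)
        elif '=' in line and current is not None:
            key, _, value = line.partition('=')
            current[1][key.strip()] = value.strip()
    return sections

sections = parse_sections(EXAMPLE)
data_nodes = [opts for name, opts in sections if name == 'ndbd']
print(len(data_nodes))                         # 2, one [ndbd] per data node
print([opts['HostName'] for opts in data_nodes])
```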

21.2.4.2 Compiling and Installing NDB Cluster from Source on Windows

Oracle provides precompiled NDB Cluster binaries for Windows which should be adequate for most users. However, if you wish, it is also possible to compile NDB Cluster for Windows from source code. The procedure for doing this is almost identical to the procedure used to compile the standard MySQL Server binaries for Windows, and uses the same tools. However, there are two major differences:

  • Building NDB Cluster requires using the NDB Cluster sources. These are available from the NDB Cluster downloads page at https://dev.mysql.com/downloads/cluster/. The archived source file should have a name similar to mysql-cluster-gpl-7.5.16.tar.gz. You can also obtain NDB Cluster sources from GitHub at https://github.com/mysql/mysql-server/tree/cluster-7.5 (NDB 7.5) and https://github.com/mysql/mysql-server/tree/cluster-7.6 (NDB 7.6). Building NDB Cluster 7.5 or 7.6 from standard MySQL Server 5.7 sources is not supported.

  • You must configure the build using the WITH_NDBCLUSTER_STORAGE_ENGINE or WITH_NDBCLUSTER option in addition to any other build options you wish to use with CMake. (WITH_NDBCLUSTER is supported as an alias for WITH_NDBCLUSTER_STORAGE_ENGINE, and works in exactly the same way.)

Important

The WITH_NDB_JAVA option is enabled by default. This means that, by default, if CMake cannot find the location of Java on your system, the configuration process fails; if you do not wish to enable Java and ClusterJ support, you must indicate this explicitly by configuring the build using -DWITH_NDB_JAVA=OFF. (Bug #12379735) Use WITH_CLASSPATH to provide the Java classpath if needed.

For more information about CMake options specific to building NDB Cluster, see Options for Compiling NDB Cluster.

Once the build process is complete, you can create a Zip archive containing the compiled binaries; Section 2.9.4, “Installing MySQL Using a Standard Source Distribution” provides the commands needed to perform this task on Windows systems. The NDB Cluster binaries can be found in the bin directory of the resulting archive, which is equivalent to the no-install archive, and which can be installed and configured in the same manner. For more information, see Section 21.2.4.1, “Installing NDB Cluster on Windows from a Binary Release”.

21.2.4.3 Initial Startup of NDB Cluster on Windows

Once the NDB Cluster executables and needed configuration files are in place, performing an initial start of the cluster is simply a matter of starting the NDB Cluster executables for all nodes in the cluster. Each cluster node process must be started separately, and on the host computer where it resides. The management node should be started first, followed by the data nodes, and then finally by any SQL nodes.

  1. On the management node host, issue the following command from the command line to start the management node process. The output should appear similar to what is shown here:

    C:\mysql\bin> ndb_mgmd
    2010-06-23 07:53:34 [MgmtSrvr] INFO -- NDB Cluster Management Server. mysql-5.7.28-ndb-7.5.16
    2010-06-23 07:53:34 [MgmtSrvr] INFO -- Reading cluster configuration from 'config.ini'
    

    The management node process continues to print logging output to the console. This is normal, because the management node is not running as a Windows service. (If you have used NDB Cluster on a Unix-like platform such as Linux, you may notice that the management node's default behavior in this regard on Windows is effectively the opposite of its behavior on Unix systems, where it runs by default as a Unix daemon process. This behavior is also true of NDB Cluster data node processes running on Windows.) For this reason, do not close the window in which ndb_mgmd.exe is running; doing so kills the management node process. (See Section 21.2.4.4, “Installing NDB Cluster Processes as Windows Services”, where we show how to install and run NDB Cluster processes as Windows services.)

    The -f option (long form --config-file) tells the management node where to find the global configuration file (config.ini). In this example it does not need to be given on the command line, because its value is supplied by the config-file line in the management node's my.ini file created earlier.

    Important

    An NDB Cluster management node caches the configuration data that it reads from config.ini; once it has created a configuration cache, it ignores the config.ini file on subsequent starts unless forced to do otherwise. This means that, if the management node fails to start due to an error in this file, you must make the management node re-read config.ini after you have corrected any errors in it. You can do this by starting ndb_mgmd.exe with the --reload or --initial option on the command line. Either of these options works to refresh the configuration cache.

    It is not necessary or advisable to use either of these options in the management node's my.ini file.

    For additional information about options which can be used with ndb_mgmd, see Section 21.4.4, “ndb_mgmd — The NDB Cluster Management Server Daemon”, as well as Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

  2. On each of the data node hosts, run the command shown here to start the data node processes:

    C:\mysql\bin> ndbd
    2010-06-23 07:53:46 [ndbd] INFO -- Configuration fetched from 'localhost:1186', generation: 1
    

    In each case, the first line of output from the data node process should resemble what is shown in the preceding example, and is followed by additional lines of logging output. As with the management node process, this is normal, because the data node is not running as a Windows service. For this reason, do not close the console window in which the data node process is running; doing so kills ndbd.exe. (For more information, see Section 21.2.4.4, “Installing NDB Cluster Processes as Windows Services”.)

  3. Do not start the SQL node yet; it cannot connect to the cluster until the data nodes have finished starting, which may take some time. Instead, in a new console window on the management node host, start the NDB Cluster management client ndb_mgm.exe, which should be in C:\mysql\bin on the management node host. (Do not try to re-use the console window where ndb_mgmd.exe is running by typing CTRL+C, as this kills the management node.) The resulting output should look like this:

    C:\mysql\bin> ndb_mgm
    -- NDB Cluster -- Management Client --
    ndb_mgm>
    

    When the prompt ndb_mgm> appears, this indicates that the management client is ready to receive NDB Cluster management commands. You can observe the status of the data nodes as they start by entering ALL STATUS at the management client prompt. This command produces a running report of the data nodes' startup sequence, which should look something like this:

    ndb_mgm> ALL STATUS
    Connected to Management Server at: localhost:1186
    Node 2: starting (Last completed phase 3) (mysql-5.7.28-ndb-7.5.16)
    Node 3: starting (Last completed phase 3) (mysql-5.7.28-ndb-7.5.16)
    
    Node 2: starting (Last completed phase 4) (mysql-5.7.28-ndb-7.5.16)
    Node 3: starting (Last completed phase 4) (mysql-5.7.28-ndb-7.5.16)
    
    Node 2: Started (version 7.5.16)
    Node 3: Started (version 7.5.16)
    
    ndb_mgm>
    
    Note

    Commands issued in the management client are not case-sensitive; we use uppercase as the canonical form of these commands, but you are not required to observe this convention when inputting them into the ndb_mgm client. For more information, see Section 21.5.2, “Commands in the NDB Cluster Management Client”.

    The output produced by ALL STATUS is likely to vary from what is shown here, according to the speed at which the data nodes are able to start, the release version number of the NDB Cluster software you are using, and other factors. What is significant is that, when you see that both data nodes have started, you are ready to start the SQL node.

    You can leave ndb_mgm.exe running; it has no negative impact on the performance of the NDB Cluster, and we use it in the next step to verify that the SQL node is connected to the cluster after you have started it.

  4. On the computer designated as the SQL node host, open a console window and navigate to the directory where you unpacked the NDB Cluster binaries (if you are following our example, this is C:\mysql\bin).

    Start the SQL node by invoking mysqld.exe from the command line, as shown here:

    C:\mysql\bin> mysqld --console
    

    The --console option causes logging information to be written to the console, which can be helpful in the event of problems. (Once you are satisfied that the SQL node is running in a satisfactory manner, you can stop it and restart it without the --console option, so that logging is performed normally.)

    In the console window where the management client (ndb_mgm.exe) is running on the management node host, enter the SHOW command, which should produce output similar to what is shown here:

    ndb_mgm> SHOW
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=2    @198.51.100.30  (Version: 5.7.28-ndb-7.5.16, Nodegroup: 0, *)
    id=3    @198.51.100.40  (Version: 5.7.28-ndb-7.5.16, Nodegroup: 0)
    
    [ndb_mgmd(MGM)] 1 node(s)
    id=1    @198.51.100.10  (Version: 5.7.28-ndb-7.5.16)
    
    [mysqld(API)]   1 node(s)
    id=4    @198.51.100.20  (Version: 5.7.28-ndb-7.5.16)
    

    You can also verify that the SQL node is connected to the NDB Cluster in the mysql client (mysql.exe) using the SHOW ENGINE NDB STATUS statement.

You should now be ready to work with database objects and data using NDB Cluster's NDBCLUSTER storage engine. See Section 21.2.7, “NDB Cluster Example with Tables and Data”, for more information and examples.

You can also install ndb_mgmd.exe, ndbd.exe, and ndbmtd.exe as Windows services. For information on how to do this, see Section 21.2.4.4, “Installing NDB Cluster Processes as Windows Services”.

21.2.4.4 Installing NDB Cluster Processes as Windows Services

Once you are satisfied that NDB Cluster is running as desired, you can install the management nodes and data nodes as Windows services, so that these processes are started and stopped automatically whenever Windows is started or stopped. This also makes it possible to control these processes from the command line with the appropriate SC START and SC STOP commands, or using the Windows graphical Services utility. NET START and NET STOP commands can also be used.

Installing programs as Windows services usually must be done using an account that has Administrator rights on the system.

To install the management node as a service on Windows, invoke ndb_mgmd.exe from the command line on the machine hosting the management node, using the --install option, as shown here:

C:\> C:\mysql\bin\ndb_mgmd.exe --install
Installing service 'NDB Cluster Management Server'
  as '"C:\mysql\bin\ndb_mgmd.exe" "--service=ndb_mgmd"'
Service successfully installed.
Important

When installing an NDB Cluster program as a Windows service, you should always specify the complete path; otherwise the service installation may fail with the error The system cannot find the file specified.
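
One way to catch this mistake before installing the service is to verify that the path given is absolute under Windows path rules. Python's ntpath module applies those rules on any platform, so this check is a portable illustration (the check itself is not part of NDB Cluster):

```python
import ntpath

# ntpath implements Windows path semantics even when run elsewhere,
# so it can verify that a service binary is given by a complete path,
# as required when installing NDB Cluster programs as Windows services.
full_path = r"C:\mysql\bin\ndb_mgmd.exe"
bare_name = "ndb_mgmd.exe"
print(ntpath.isabs(full_path))  # True  - safe to use with --install
print(ntpath.isabs(bare_name))  # False - risks "cannot find the file"
```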

The --install option must be used first, ahead of any other options that might be specified for ndb_mgmd.exe. However, it is preferable to specify such options in an options file instead. If your options file is not in one of the default locations as shown in the output of ndb_mgmd.exe --help, you can specify the location using the --config-file option.

Now you should be able to start and stop the management server like this:

C:\> SC START ndb_mgmd

C:\> SC STOP ndb_mgmd
Note

If using NET commands, you can also start or stop the management server as a Windows service using the descriptive name, as shown here:

C:\> NET START "NDB Cluster Management Server"
The NDB Cluster Management Server service is starting.
The NDB Cluster Management Server service was started successfully.

C:\> NET STOP  "NDB Cluster Management Server"
The NDB Cluster Management Server service is stopping..
The NDB Cluster Management Server service was stopped successfully.

It is usually simpler to specify a short service name or to permit the default service name to be used when installing the service, and then reference that name when starting or stopping the service. To specify a service name other than ndb_mgmd, append it to the --install option, as shown in this example:

C:\> C:\mysql\bin\ndb_mgmd.exe --install=mgmd1
Installing service 'NDB Cluster Management Server'
  as '"C:\mysql\bin\ndb_mgmd.exe" "--service=mgmd1"'
Service successfully installed.

Now you should be able to start or stop the service using the name you have specified, like this:

C:\> SC START mgmd1

C:\> SC STOP mgmd1

To remove the management node service, use SC DELETE service_name:

C:\> SC DELETE mgmd1

Alternatively, invoke ndb_mgmd.exe with the --remove option, as shown here:

C:\> C:\mysql\bin\ndb_mgmd.exe --remove
Removing service 'NDB Cluster Management Server'
Service successfully removed.

If you installed the service using a service name other than the default, pass the service name as the value of the ndb_mgmd.exe --remove option, like this:

C:\> C:\mysql\bin\ndb_mgmd.exe --remove=mgmd1
Removing service 'mgmd1'
Service successfully removed.

Installation of an NDB Cluster data node process as a Windows service can be done in a similar fashion, using the --install option for ndbd.exe (or ndbmtd.exe), as shown here:

C:\> C:\mysql\bin\ndbd.exe --install
Installing service 'NDB Cluster Data Node Daemon' as '"C:\mysql\bin\ndbd.exe" "--service=ndbd"'
Service successfully installed.

Now you can start or stop the data node as shown in the following example:

C:\> SC START ndbd

C:\> SC STOP ndbd

To remove the data node service, use SC DELETE service_name:

C:\> SC DELETE ndbd

Alternatively, invoke ndbd.exe with the --remove option, as shown here:

C:\> C:\mysql\bin\ndbd.exe --remove
Removing service 'NDB Cluster Data Node Daemon'
Service successfully removed.

As with ndb_mgmd.exe (and mysqld.exe), when installing ndbd.exe as a Windows service, you can also specify a name for the service as the value of --install, and then use it when starting or stopping the service, like this:

C:\> C:\mysql\bin\ndbd.exe --install=dnode1
Installing service 'dnode1' as '"C:\mysql\bin\ndbd.exe" "--service=dnode1"'
Service successfully installed.

C:\> SC START dnode1

C:\> SC STOP dnode1

If you specified a service name when installing the data node service, you can use this name when removing it as well, as shown here:

C:\> SC DELETE dnode1

Alternatively, you can pass the service name as the value of the ndbd.exe --remove option, as shown here:

C:\> C:\mysql\bin\ndbd.exe --remove=dnode1
Removing service 'dnode1'
Service successfully removed.

Installation of the SQL node as a Windows service, starting the service, stopping the service, and removing the service are done in a similar fashion, using mysqld --install, SC START, SC STOP, and SC DELETE (or mysqld --remove). NET commands can also be used to start or stop a service. For additional information, see Section 2.3.4.8, “Starting MySQL as a Windows Service”.

21.2.5 Initial Configuration of NDB Cluster

In this section, we discuss manual configuration of an installed NDB Cluster by creating and editing configuration files.

NDB Cluster also provides a GUI installer which can be used to perform the configuration without the need to edit text files in a separate application. For more information, see Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”.

For our four-node, four-host NDB Cluster (see Cluster nodes and host computers), it is necessary to write four configuration files, one per node host.

  • Each data node or SQL node requires a my.cnf file that provides two pieces of information: a connection string that tells the node where to find the management node, and a line telling the MySQL server on this host (the machine hosting the data node) to enable the NDBCLUSTER storage engine.

    For more information on connection strings, see Section 21.3.3.3, “NDB Cluster Connection Strings”.

  • The management node needs a config.ini file telling it how many replicas to maintain, how much memory to allocate for data and indexes on each data node, where to find the data nodes, where to save data to disk on each data node, and where to find any SQL nodes.

Configuring the data nodes and SQL nodes.  The my.cnf file needed for the data nodes is fairly simple. The configuration file should be located in the /etc directory and can be edited using any text editor. (Create the file if it does not exist.) For example:

shell> vi /etc/my.cnf
Note

We show vi being used here to create the file, but any text editor should work just as well.

For each data node and SQL node in our example setup, my.cnf should look like this:

[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine

[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=198.51.100.10  # location of management server

After entering the preceding information, save this file and exit the text editor. Do this for the machines hosting data node A, data node B, and the SQL node.

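Since the same my.cnf contents are needed on each of these hosts, creating the file can also be scripted; a minimal sketch (the scratch path is illustrative — on a real node the file would be written to /etc/my.cnf):

```shell
# Write the my.cnf shown above without an interactive editor.
# A scratch path is used here so the sketch is non-destructive;
# on a data node or SQL node host this would be /etc/my.cnf.
MYCNF=$(mktemp)
cat > "$MYCNF" <<'EOF'
[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine

[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=198.51.100.10  # location of management server
EOF

# Confirm both required options made it into the file.
grep -q '^ndbcluster' "$MYCNF" && \
grep -q '^ndb-connectstring=198.51.100.10' "$MYCNF" && \
echo "my.cnf written"
```

Repeating this on the hosts for data node A, data node B, and the SQL node produces identical files, as the example setup requires.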
Important

Once you have started a mysqld process with the ndbcluster and ndb-connectstring parameters in the [mysqld] and [mysql_cluster] sections of the my.cnf file as shown previously, you cannot execute any CREATE TABLE or ALTER TABLE statements without having actually started the cluster. Otherwise, these statements will fail with an error. This is by design.

Configuring the management node.  The first step in configuring the management node is to create the directory in which the configuration file can be found and then to create the file itself. For example (running as root):

shell> mkdir /var/lib/mysql-cluster
shell> cd /var/lib/mysql-cluster
shell> vi config.ini

For our representative setup, the config.ini file should read as follows:

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
                  # For DataMemory and IndexMemory, we have used the
                  # default values. Since the "world" database takes up
                  # only about 500KB, this should be more than enough for
                  # this example NDB Cluster setup.
                  # NOTE: IndexMemory is deprecated in NDB 7.6 and later; in
                  # these versions, resources for all data and indexes are
                  # allocated by DataMemory and any that are set for IndexMemory
                  # are added to the DataMemory resource pool
ServerPort=2202   # This is the default value; however, you can use any
                  # port that is free for all the hosts in the cluster
                  # Note1: It is recommended that you do not specify the port
                  # number at all and simply allow the default value to be used
                  # instead
                  # Note2: The port was formerly specified using the PortNumber
                  # TCP parameter; this parameter is no longer available in NDB
                  # Cluster 7.5.

[ndb_mgmd]
# Management process options:
HostName=198.51.100.10          # Hostname or IP address of MGM node
DataDir=/var/lib/mysql-cluster  # Directory for MGM node log files

[ndbd]
# Options for data node "A":
                                # (one [ndbd] section per data node)
HostName=198.51.100.30          # Hostname or IP address
NodeId=2                        # Node ID for this data node
DataDir=/usr/local/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
HostName=198.51.100.40          # Hostname or IP address
NodeId=3                        # Node ID for this data node
DataDir=/usr/local/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
HostName=198.51.100.20          # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)
Note

The world database can be downloaded from https://dev.mysql.com/doc/index-other.html.

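Once config.ini is written, a quick structural sanity check can be scripted: the number of [ndbd] sections is the number of data nodes, and it must be divisible by NoOfReplicas. A sketch, using a stripped-down stand-in for the file (on the management host the real path is /var/lib/mysql-cluster/config.ini):

```shell
# Stand-in config.ini mirroring the section layout shown above.
CFG=$(mktemp)
printf '%s\n' '[ndbd default]' 'NoOfReplicas=2' '[ndb_mgmd]' '[ndbd]' '[ndbd]' '[mysqld]' > "$CFG"

# One [ndbd] section per data node; the anchored pattern keeps
# "[ndbd default]" from being counted as a data node.
NDBD_COUNT=$(grep -c '^\[ndbd\]$' "$CFG")
REPLICAS=$(sed -n 's/^NoOfReplicas=//p' "$CFG")

echo "data nodes: $NDBD_COUNT, replicas: $REPLICAS"
# With 2 data nodes and NoOfReplicas=2, this forms exactly one node group.
```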
After all the configuration files have been created and these minimal options have been specified, you are ready to proceed with starting the cluster and verifying that all processes are running. We discuss how this is done in Section 21.2.6, “Initial Startup of NDB Cluster”.

For more detailed information about the available NDB Cluster configuration parameters and their uses, see Section 21.3.3, “NDB Cluster Configuration Files”, and Section 21.3, “Configuration of NDB Cluster”. For configuration of NDB Cluster as relates to making backups, see Section 21.5.3.3, “Configuration for NDB Cluster Backups”.

Note

The default port for Cluster management nodes is 1186; the default port for data nodes is 2202. However, the cluster can automatically allocate ports for data nodes from those that are already free.

21.2.6 Initial Startup of NDB Cluster

Starting the cluster is not very difficult after it has been configured. Each cluster node process must be started separately, and on the host where it resides. The management node should be started first, followed by the data nodes, and then finally by any SQL nodes:

  1. On the management host, issue the following command from the system shell to start the management node process:

    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    

    The first time that it is started, ndb_mgmd must be told where to find its configuration file, using the -f or --config-file option. (See Section 21.4.4, “ndb_mgmd — The NDB Cluster Management Server Daemon”, for details.)

    For additional options which can be used with ndb_mgmd, see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

  2. On each of the data node hosts, run this command to start the ndbd process:

    shell> ndbd
    
  3. If you used RPM files to install MySQL on the cluster host where the SQL node is to reside, you can (and should) use the supplied startup script to start the MySQL server process on the SQL node.

If all has gone well, and the cluster has been set up correctly, the cluster should now be operational. You can test this by invoking the ndb_mgm management node client. The output should look like that shown here, although you might see some slight differences in the output depending upon the exact version of MySQL that you are using:

shell> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @198.51.100.30  (Version: 5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=3    @198.51.100.40  (Version: 5.7.28-ndb-7.5.16, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @198.51.100.10  (Version: 5.7.28-ndb-7.5.16)

[mysqld(API)]   1 node(s)
id=4    @198.51.100.20  (Version: 5.7.28-ndb-7.5.16)

The SQL node is referenced here as [mysqld(API)], which reflects the fact that the mysqld process is acting as an NDB Cluster API node.

Note

The IP address shown for a given NDB Cluster SQL or other API node in the output of SHOW is the address used by the SQL or API node to connect to the cluster data nodes, and not to any management node.

You should now be ready to work with databases, tables, and data in NDB Cluster. See Section 21.2.7, “NDB Cluster Example with Tables and Data”, for a brief discussion.

21.2.7 NDB Cluster Example with Tables and Data

Note

The information in this section applies to NDB Cluster running on both Unix and Windows platforms.

Working with database tables and data in NDB Cluster is not much different from doing so in standard MySQL. There are two key points to keep in mind:

  • For a table to be replicated in the cluster, it must use the NDBCLUSTER storage engine. To specify this, use the ENGINE=NDBCLUSTER or ENGINE=NDB option when creating the table:

    CREATE TABLE tbl_name (col_name column_definitions) ENGINE=NDBCLUSTER;
    

    Alternatively, for an existing table that uses a different storage engine, use ALTER TABLE to change the table to use NDBCLUSTER:

    ALTER TABLE tbl_name ENGINE=NDBCLUSTER;
    
  • Every NDBCLUSTER table has a primary key. If no primary key is defined by the user when a table is created, the NDBCLUSTER storage engine automatically generates a hidden one. Such a key takes up space just as does any other table index. (It is not uncommon to encounter problems due to insufficient memory for accommodating these automatically created indexes.)

If you are importing tables from an existing database using the output of mysqldump, you can open the SQL script in a text editor and add the ENGINE option to any table creation statements, or replace any existing ENGINE options. Suppose that you have the world sample database on another MySQL server that does not support NDB Cluster, and you want to export the City table:

shell> mysqldump --add-drop-table world City > city_table.sql

The resulting city_table.sql file will contain this table creation statement (and the INSERT statements necessary to import the table data):

DROP TABLE IF EXISTS `City`;
CREATE TABLE `City` (
  `ID` int(11) NOT NULL auto_increment,
  `Name` char(35) NOT NULL default '',
  `CountryCode` char(3) NOT NULL default '',
  `District` char(20) NOT NULL default '',
  `Population` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `City` VALUES (1,'Kabul','AFG','Kabol',1780000);
INSERT INTO `City` VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO `City` VALUES (3,'Herat','AFG','Herat',186800);
(remaining INSERT statements omitted)

You need to make sure that MySQL uses the NDBCLUSTER storage engine for this table. There are two ways that this can be accomplished. One of these is to modify the table definition before importing it into the Cluster database. Using the City table as an example, modify the ENGINE option of the definition as follows:

DROP TABLE IF EXISTS `City`;
CREATE TABLE `City` (
  `ID` int(11) NOT NULL auto_increment,
  `Name` char(35) NOT NULL default '',
  `CountryCode` char(3) NOT NULL default '',
  `District` char(20) NOT NULL default '',
  `Population` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID`)
) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1;

INSERT INTO `City` VALUES (1,'Kabul','AFG','Kabol',1780000);
INSERT INTO `City` VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO `City` VALUES (3,'Herat','AFG','Herat',186800);
(remaining INSERT statements omitted)

This must be done for the definition of each table that is to be part of the clustered database. The easiest way to accomplish this is to do a search-and-replace on the file that contains the definitions and replace all instances of TYPE=engine_name or ENGINE=engine_name with ENGINE=NDBCLUSTER. If you do not want to modify the file, you can use the unmodified file to create the tables, and then use ALTER TABLE to change their storage engine. The particulars are given later in this section.

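The search-and-replace itself can be done with sed rather than an interactive editor; a sketch using a small stand-in for the dump file (the file names under /tmp are illustrative):

```shell
# Stand-in for the dump file produced by mysqldump earlier.
cat > /tmp/city_table.sql <<'EOF'
CREATE TABLE `City` (
  `ID` int(11) NOT NULL auto_increment,
  PRIMARY KEY  (`ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
EOF

# Replace every TYPE=engine_name or ENGINE=engine_name clause
# with ENGINE=NDBCLUSTER, writing the result to a new file.
sed -E 's/(TYPE|ENGINE)=[A-Za-z_]+/ENGINE=NDBCLUSTER/' \
    /tmp/city_table.sql > /tmp/city_table_ndb.sql

grep 'ENGINE=' /tmp/city_table_ndb.sql
# prints: ) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1;
```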
Assuming that you have already created a database named world on the SQL node of the cluster, you can then use the mysql command-line client to read city_table.sql, and create and populate the corresponding table in the usual manner:

shell> mysql world < city_table.sql

It is very important to keep in mind that the preceding command must be executed on the host where the SQL node is running (in this case, on the machine with the IP address 198.51.100.20).

To create a copy of the entire world database on the SQL node, use mysqldump on the noncluster server to export the database to a file named world.sql (for example, in the /tmp directory). Then modify the table definitions as just described and import the file into the SQL node of the cluster like this:

shell> mysql world < /tmp/world.sql

If you save the file to a different location, adjust the preceding instructions accordingly.

Running SELECT queries on the SQL node is no different from running them on any other instance of a MySQL server. To run queries from the command line, you first need to log in to the MySQL Monitor in the usual way (specify the root password at the Enter password: prompt):

shell> mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 5.7.28-ndb-7.5.16

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

We simply use the MySQL server's root account and assume that you have followed the standard security precautions for installing a MySQL server, including setting a strong root password. For more information, see Section 2.10.4, “Securing the Initial MySQL Account”.

It is worth taking into account that Cluster nodes do not make use of the MySQL privilege system when accessing one another. Setting or changing MySQL user accounts (including the root account) affects only applications that access the SQL node, not interaction between nodes. See Section 21.5.12.2, “NDB Cluster and MySQL Privileges”, for more information.

If you did not modify the ENGINE clauses in the table definitions prior to importing the SQL script, you should run the following statements at this point:

mysql> USE world;
mysql> ALTER TABLE City ENGINE=NDBCLUSTER;
mysql> ALTER TABLE Country ENGINE=NDBCLUSTER;
mysql> ALTER TABLE CountryLanguage ENGINE=NDBCLUSTER;
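If the database contains many tables, these ALTER TABLE statements can be generated rather than typed; a sketch (the table list is the one from the world example, and the generated output would be piped into the mysql client on the SQL node):

```shell
# Emit one ALTER TABLE statement per table; pipe the output into
# "mysql world" on the SQL node to apply the changes.
for tbl in City Country CountryLanguage; do
  printf 'ALTER TABLE %s ENGINE=NDBCLUSTER;\n' "$tbl"
done
```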

Selecting a database and running a SELECT query against a table in that database is also accomplished in the usual manner, as is exiting the MySQL Monitor:

mysql> USE world;
mysql> SELECT Name, Population FROM City ORDER BY Population DESC LIMIT 5;
+-----------+------------+
| Name      | Population |
+-----------+------------+
| Bombay    |   10500000 |
| Seoul     |    9981619 |
| São Paulo |    9968485 |
| Shanghai  |    9696300 |
| Jakarta   |    9604900 |
+-----------+------------+
5 rows in set (0.34 sec)

mysql> \q
Bye

shell>

Applications that use MySQL can employ standard APIs to access NDB tables. It is important to remember that your application must access the SQL node, and not the management or data nodes. This brief example shows how we might execute the SELECT statement just shown by using the PHP 5.X mysqli extension running on a Web server elsewhere on the network:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
  <meta http-equiv="Content-Type"
           content="text/html; charset=iso-8859-1">
  <title>SIMPLE mysqli SELECT</title>
</head>
<body>
<?php
  # connect to SQL node:
  $link = new mysqli('198.51.100.20', 'root', 'root_password', 'world');
  # parameters for mysqli constructor are:
  #   host, user, password, database

  if( mysqli_connect_errno() )
    die("Connect failed: " . mysqli_connect_error());

  $query = "SELECT Name, Population
            FROM City
            ORDER BY Population DESC
            LIMIT 5";

  # if no errors...
  if( $result = $link->query($query) )
  {
?>
<table border="1" width="40%" cellpadding="4" cellspacing ="1">
  <tbody>
  <tr>
    <th width="10%">City</th>
    <th>Population</th>
  </tr>
<?php
    # then display the results...
    while($row = $result->fetch_object())
      printf("<tr>\n  <td align=\"center\">%s</td><td>%d</td>\n</tr>\n",
              $row->Name, $row->Population);
?>
  </tbody>
</table>
<?php
  # ...and verify the number of rows that were retrieved
    printf("<p>Affected rows: %d</p>\n", $link->affected_rows);
  }
  else
    # otherwise, tell us what went wrong
    echo $link->error;

  # free the result set and the mysqli connection object
  $result->close();
  $link->close();
?>
</body>
</html>

We assume that the process running on the Web server can reach the IP address of the SQL node.

In a similar fashion, you can use the MySQL C API, Perl-DBI, Python-mysql, or MySQL Connectors to perform the tasks of data definition and manipulation just as you would normally with MySQL.

21.2.8 Safe Shutdown and Restart of NDB Cluster

To shut down the cluster, enter the following command in a shell on the machine hosting the management node:

shell> ndb_mgm -e shutdown

The -e option here is used to pass a command to the ndb_mgm client from the shell. (See Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”, for more information about this option.) The command causes the ndb_mgm, ndb_mgmd, and any ndbd or ndbmtd processes to terminate gracefully. Any SQL nodes can be terminated using mysqladmin shutdown and other means. On Windows platforms, assuming that you have installed the SQL node as a Windows service, you can use SC STOP service_name or NET STOP service_name.

To restart the cluster on Unix platforms, run these commands:

  • On the management host (198.51.100.10 in our example setup):

    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    
  • On each of the data node hosts (198.51.100.30 and 198.51.100.40):

    shell> ndbd
    
  • Use the ndb_mgm client to verify that both data nodes have started successfully.

  • On the SQL host (198.51.100.20):

    shell> mysqld_safe &
    

On Windows platforms, assuming that you have installed all NDB Cluster processes as Windows services using the default service names (see Section 21.2.4.4, “Installing NDB Cluster Processes as Windows Services”), you can restart the cluster as follows:

  • On the management host (198.51.100.10 in our example setup), execute the following command:

    C:\> SC START ndb_mgmd
    
  • On each of the data node hosts (198.51.100.30 and 198.51.100.40), execute the following command:

    C:\> SC START ndbd
    
  • On the management node host, use the ndb_mgm client to verify that the management node and both data nodes have started successfully (see Section 21.2.4.3, “Initial Startup of NDB Cluster on Windows”).

  • On the SQL node host (198.51.100.20), execute the following command:

    C:\> SC START mysql
    

In a production setting, it is usually not desirable to shut down the cluster completely. In many cases, even when making configuration changes, or performing upgrades to the cluster hardware or software (or both), which require shutting down individual host machines, it is possible to do so without shutting down the cluster as a whole by performing a rolling restart of the cluster. For more information about doing this, see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”.

21.2.9 Upgrading and Downgrading NDB Cluster

This section provides information about NDB Cluster software and table file compatibility between different NDB Cluster 7.5 releases with regard to performing upgrades and downgrades as well as compatibility matrices and notes. You are expected already to be familiar with installing and configuring an NDB Cluster prior to attempting an upgrade or downgrade. See Section 21.3, “Configuration of NDB Cluster”.

Important

Only compatibility between MySQL versions with regard to NDBCLUSTER is taken into account in this section, and there are likely other issues to be considered. As with any other MySQL software upgrade or downgrade, you are strongly encouraged to review the relevant portions of the MySQL Manual for the MySQL versions from which and to which you intend to migrate, before attempting an upgrade or downgrade of the NDB Cluster software. See Section 2.11, “Upgrading MySQL”.

The table shown here provides information on NDB Cluster upgrade and downgrade compatibility among different releases of NDB 7.5. Additional notes about upgrades and downgrades to, from, or within the NDB Cluster 7.5 release series can be found following the table.

Upgrades and Downgrades, NDB Cluster 7.5

Figure 21.36 NDB Cluster Upgrade and Downgrade Compatibility, MySQL NDB Cluster 7.5

Image is titled "MySQL NDB Cluster 7.5" and shows rows with the value "7.5.8" on top and "7.5.1" at the bottom, with every 7.5.x Cluster release in between. To the left are arrows pointing up and down. Below this is an area titled "Key" that explains the arrows as meaning "Online upgrades and downgrades possible". The top value, "7.5.8", represents the latest release at the time the image was created and might not reflect the current latest release.

Version support.  The following versions of NDB Cluster are supported for upgrades to GA releases of NDB Cluster 7.5 (7.5.4 and later):

  • NDB Cluster 7.4 GA releases (7.4.4 and later)

  • NDB Cluster 7.3 GA releases (7.3.2 and later)

  • NDB Cluster 7.2 GA releases (7.2.4 and later)

Known Issues - NDB 7.5.  The following issues are known to occur when upgrading to or between NDB 7.5 releases:

  • When upgrading from NDB 7.5.2 or 7.5.3 to a later version, the use of mysqld with the --initialize and --ndbcluster options together caused problems later running mysql_upgrade.

    When run with --initialize, the server does not require NDB support; having NDB enabled at this time can cause problems with ndbinfo tables. To keep this from happening, the --initialize option now causes mysqld to ignore the --ndbcluster option if the latter is also specified.

    An upgrade that has failed for these reasons can be worked around as follows:

    1. Perform a rolling restart of the entire cluster

    2. Delete all .frm files in the data/ndbinfo directory

    3. Run mysql_upgrade.

    (Bug #81689, Bug #82724, Bug #24521927, Bug #23518923)

  • During an online upgrade from an NDB Cluster 7.3 release to an NDB 7.4 (or later) release, the failures of several data nodes running the lower version during local checkpoints (LCPs), and just prior to upgrading these nodes, led to additional node failures following the upgrade. This was due to lingering elements of the EMPTY_LCP protocol initiated by the older nodes as part of an LCP-plus-restart sequence, and which is no longer used in NDB 7.4 and later due to LCP optimizations implemented in those versions. This issue was fixed in NDB 7.5.4. (Bug #23129433)

  • Beginning with NDB 7.5.2, the ndb_binlog_index table uses the InnoDB storage engine. (Use of the MyISAM storage engine for this table continues to be supported for backward compatibility.)

    When upgrading a previous release to NDB 7.5.2 or later, you can use the --force --upgrade-system-tables options with mysql_upgrade so that it performs ALTER TABLE ... ENGINE=INNODB on the ndb_binlog_index table.

    For more information, see Section 21.6.4, “NDB Cluster Replication Schema and Tables”.

  • Online upgrades from previous versions of NDB Cluster to NDB 7.5.1 were not possible due to missing entries in the matrix used to test upgrade compatibility between versions. (Bug #22024947)

    Also in NDB 7.5.1, mysql_upgrade failed to upgrade the sys schema if a sys database directory existed but was empty. (Bug #81352, Bug #23249846, Bug #22875519)

Known Issues - NDB 7.6.  The following issues are known to occur when upgrading to or between NDB 7.6 releases:

Changes in Disk Data file format.  Due to changes in disk format, upgrading to or downgrading from either of the versions listed here requires an initial node restart of each data node:

  • NDB 7.6.2

  • NDB 7.6.4

To avoid problems relating to the old format, you should re-create any existing tablespaces and undo log file groups when upgrading. You can do this by performing an initial restart of each data node (that is, using the --initial option) as part of the upgrade process.

If you are using Disk Data tables, a downgrade from any NDB 7.6 release to any NDB 7.5 or earlier release requires that you restart all data nodes with --initial as part of the downgrade process. This is because NDB 7.5 and earlier release series are not able to read the new Disk Data file format.

Important

Upgrading to NDB 7.6.4 or later from an earlier release, or downgrading from NDB 7.6.4 or later to an earlier release, requires purging then re-creating the NDB data node file system, which means that each data node must be restarted using the --initial option as part of the rolling restart or system restart normally required. (Starting a data node with no file system is already equivalent to an initial restart; in such cases, --initial is implied and not required. This is unchanged from previous releases of NDB Cluster.)

When such a restart is performed as part of an upgrade to NDB 7.6.4 or later, any existing LCP files are checked for the presence of the LCP sysfile, indicating that the existing data node file system was written using NDB 7.6.4 or later. If such a node file system exists, but does not contain the sysfile, and if any data nodes are restarted without the --initial option, NDB causes the restart to fail with an appropriate error message.

You should also be aware that no such protection is possible when downgrading from NDB 7.6.4 or later to a release previous to NDB 7.6.4.

21.3 Configuration of NDB Cluster

A MySQL server that is part of an NDB Cluster differs in one chief respect from a normal (nonclustered) MySQL server, in that it employs the NDB storage engine. This engine is also referred to sometimes as NDBCLUSTER, although NDB is preferred.

To avoid unnecessary allocation of resources, the server is configured by default with the NDB storage engine disabled. To enable NDB, you must modify the server's my.cnf configuration file, or start the server with the --ndbcluster option.

This MySQL server is a part of the cluster, so it also must know how to access a management node to obtain the cluster configuration data. The default behavior is to look for the management node on localhost. However, should you need to specify that its location is elsewhere, this can be done in my.cnf, or with the mysql client. Before the NDB storage engine can be used, at least one management node must be operational, as well as any desired data nodes.


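For example, if the management node runs on a different host, a minimal my.cnf fragment such as the following both enables the engine and supplies the connection string (the host name shown here is a placeholder):

[mysqld]
ndbcluster
ndb-connectstring=mgmhost.example.com:1186

Port 1186 is the default management node port and can be omitted.
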
For more information about --ndbcluster and other mysqld options specific to NDB Cluster, see Section 21.3.3.9.1, “MySQL Server Options for NDB Cluster”.

You can also use the NDB Cluster Auto-Installer to set up and deploy an NDB Cluster on one or more hosts using a browser-based GUI. For more information, see Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”.

For general information about installing NDB Cluster, see Section 21.2, “NDB Cluster Installation”.

21.3.1 Quick Test Setup of NDB Cluster

To familiarize you with the basics, we will describe the simplest possible configuration for a functional NDB Cluster. After this, you should be able to design your desired setup from the information provided in the other relevant sections of this chapter.

First, you need to create a configuration directory such as /var/lib/mysql-cluster, by executing the following command as the system root user:

shell> mkdir /var/lib/mysql-cluster

In this directory, create a file named config.ini that contains the following information. Substitute appropriate values for HostName and DataDir as necessary for your system.

# file "config.ini" - showing minimal setup consisting of 1 data node,
# 1 management server, and 3 MySQL servers.
# The empty default sections are not required, and are shown only for
# the sake of completeness.
# Data nodes must provide a hostname but MySQL Servers are not required
# to do so.
# If you don't know the hostname for your machine, use localhost.
# The DataDir parameter also has a default value, but it is recommended to
# set it explicitly.
# Note: [db], [api], and [mgm] are aliases for [ndbd], [mysqld], and [ndb_mgmd],
# respectively. [db] is deprecated and should not be used in new installations.

[ndbd default]
NoOfReplicas= 1

[mysqld default]
[ndb_mgmd default]
[tcp default]

[ndb_mgmd]
HostName= myhost.example.com

[ndbd]
HostName= myhost.example.com
DataDir= /var/lib/mysql-cluster

[mysqld]
[mysqld]
[mysqld]

You can now start the ndb_mgmd management server. By default, it attempts to read the config.ini file in its current working directory, so change location into the directory where the file is located and then invoke ndb_mgmd:

shell> cd /var/lib/mysql-cluster
shell> ndb_mgmd

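If you prefer not to change directories, the location of the configuration file can instead be given explicitly using the -f (or --config-file) option:

shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
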
Then start a single data node by running ndbd:

shell> ndbd

For command-line options which can be used when starting ndbd, see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

By default, ndbd looks for the management server at localhost on port 1186.

Note

If you have installed MySQL from a binary tarball, you will need to specify the path of the ndb_mgmd and ndbd servers explicitly. (Normally, these will be found in /usr/local/mysql/bin.)

Finally, change location to the MySQL data directory (usually /var/lib/mysql or /usr/local/mysql/data), and make sure that the my.cnf file contains the option necessary to enable the NDB storage engine:

[mysqld]
ndbcluster

You can now start the MySQL server as usual:

shell> mysqld_safe --user=mysql &

Wait a moment to make sure the MySQL server is running properly. If you see the notice mysql ended, check the server's .err file to find out what went wrong.

If all has gone well so far, you now can start using the cluster. Connect to the server and verify that the NDBCLUSTER storage engine is enabled:

shell> mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 5.7.29

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW ENGINES\G
...
*************************** 12. row ***************************
Engine: NDBCLUSTER
Support: YES
Comment: Clustered, fault-tolerant, memory-based tables
*************************** 13. row ***************************
Engine: NDB
Support: YES
Comment: Alias for NDBCLUSTER
...

The row numbers shown in the preceding example output may be different from those shown on your system, depending upon how your server is configured.

Try to create an NDBCLUSTER table:

shell> mysql
mysql> USE test;
Database changed

mysql> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.09 sec)

mysql> SHOW CREATE TABLE ctest \G
*************************** 1. row ***************************
       Table: ctest
Create Table: CREATE TABLE `ctest` (
  `i` int(11) default NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

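To confirm that the table accepts and returns data, you can insert a row and read it back:

mysql> INSERT INTO ctest VALUES (1);
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM ctest;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

(The timings and column widths shown are illustrative only.)
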
To check that your nodes were set up properly, start the management client:

shell> ndb_mgm

Use the SHOW command from within the management client to obtain a report on the cluster's status:

ndb_mgm> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)]     1 node(s)
id=2    @127.0.0.1  (Version: 5.7.28-ndb-7.5.16, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (Version: 5.7.28-ndb-7.5.16)

[mysqld(API)]   3 node(s)
id=3    @127.0.0.1  (Version: 5.7.28-ndb-7.5.16)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)

At this point, you have successfully set up a working NDB Cluster. You can now store data in the cluster by using any table created with ENGINE=NDBCLUSTER or its alias ENGINE=NDB.

21.3.2 Overview of NDB Cluster Configuration Parameters, Options, and Variables

The next several sections provide summary tables of NDB Cluster node configuration parameters used in the config.ini file to govern various aspects of node behavior, as well as of options and variables read by mysqld from a my.cnf file or from the command line when run as an NDB Cluster process. Each of the node parameter tables lists the parameters for a given type (ndbd, ndb_mgmd, mysqld, computer, tcp, or shm). All tables include the data type for the parameter, option, or variable, as well as its default, minimum, and maximum values as applicable.

Considerations when restarting nodes.  For node parameters, these tables also indicate what type of restart is required (node restart or system restart)—and whether the restart must be done with --initial—to change the value of a given configuration parameter. When performing a node restart or an initial node restart, all of the cluster's data nodes must be restarted in turn (also referred to as a rolling restart). It is possible to update cluster configuration parameters marked as node online—that is, without shutting down the cluster—in this fashion. An initial node restart requires restarting each ndbd process with the --initial option.


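For example, a single data node can be restarted from within the ndb_mgm management client; the node ID (2 here) corresponds to a data node ID as reported by the SHOW command:

ndb_mgm> 2 RESTART

Adding the -i option to RESTART requests an initial node restart of that node.
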
A system restart requires a complete shutdown and restart of the entire cluster. An initial system restart requires taking a backup of the cluster, wiping the cluster file system after shutdown, and then restoring from the backup following the restart.


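As a sketch, an initial system restart might proceed as follows; the node ID, backup ID, and backup path are placeholders that depend on your cluster and its configuration:

shell> ndb_mgm -e "START BACKUP"
shell> ndb_mgm -e "SHUTDOWN"
shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
shell> ndbd --initial
shell> ndb_restore --nodeid=2 --backupid=1 --restore_data \
         --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-1

When restoring, --restore_meta is normally also used (once) to re-create table definitions; see the ndb_restore documentation for details.
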
In any cluster restart, all of the cluster's management servers must be restarted for them to read the updated configuration parameter values.

Important

Values for numeric cluster parameters can generally be increased without any problems, although it is advisable to do so progressively, making such adjustments in relatively small increments. Many of these can be increased online, using a rolling restart.

However, decreasing the values of such parameters—whether this is done using a node restart, node initial restart, or even a complete system restart of the cluster—is not to be undertaken lightly; it is recommended that you do so only after careful planning and testing. This is especially true with regard to those parameters that relate to memory usage and disk space, such as MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes. In addition, it is generally the case that configuration parameters relating to memory and disk usage can be raised using a simple node restart, but lowering them requires an initial node restart.

Because some of these parameters can be used for configuring more than one type of cluster node, they may appear in more than one of the tables.

Note

4294967039 often appears as a maximum value in these tables. This value is defined in the NDBCLUSTER sources as MAX_INT_RNIL and is equal to 0xFFFFFEFF, or 2^32 − 2^8 − 1.


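The arithmetic can be checked from the shell:

```shell
# MAX_INT_RNIL: 2^32 - 2^8 - 1 = 4294967039 (0xFFFFFEFF)
echo $(( (1 << 32) - (1 << 8) - 1 ))
```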
21.3.2.1 NDB Cluster Data Node Configuration Parameters

The listings in this section provide information about parameters used in the [ndbd] or [ndbd default] sections of a config.ini file for configuring NDB Cluster data nodes. For detailed descriptions and other additional information about each of these parameters, see Section 21.3.3.6, “Defining NDB Cluster Data Nodes”.

These parameters also apply to ndbmtd, the multithreaded version of ndbd. For more information, see Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”.

  • Arbitration: How arbitration should be performed to avoid split-brain issues in the event of node failure.

  • ArbitrationTimeout: Maximum time (milliseconds) database partition waits for arbitration signal

  • BackupDataBufferSize: Default size of databuffer for a backup (in bytes)

  • BackupDataDir: Path to where to store backups. Note that the string '/BACKUP' is always appended to this setting, so that the *effective* default is FileSystemPath/BACKUP.

  • BackupDiskWriteSpeedPct: Sets the percentage of the data node's allocated maximum write speed (MaxDiskWriteSpeed) to reserve for LCPs when starting a backup.

  • BackupLogBufferSize: Default size of log buffer for a backup (in bytes)

  • BackupMaxWriteSize: Maximum size of file system writes made by backup (in bytes)

  • BackupMemory: Total memory allocated for backups per node (in bytes)

  • BackupReportFrequency: Frequency of backup status reports during backup in seconds

  • BackupWriteSize: Default size of file system writes made by backup (in bytes)

  • BatchSizePerLocalScan: Used to calculate the number of lock records for scan with hold lock

  • BuildIndexThreads: Number of threads to use for building ordered indexes during a system or node restart. Also applies when running ndb_restore --rebuild-indexes. Setting this parameter to 0 disables multithreaded building of ordered indexes.

  • CompressedBackup: Use zlib to compress backups as they are written

  • CompressedLCP: Write compressed LCPs using zlib

  • ConnectCheckIntervalDelay: Time between data node connectivity check stages. Data node is considered suspect after 1 interval and dead after 2 intervals with no response.

  • CrashOnCorruptedTuple: When enabled, forces node to shut down whenever it detects a corrupted tuple.

  • DataDir: Data directory for this node

  • DataMemory: Number of bytes on each data node allocated for storing data; subject to available system RAM and size of IndexMemory.

  • DefaultHashMapSize: Set size (in buckets) to use for table hash maps. Three values are supported: 0, 240, and 3840. Intended primarily for upgrades and downgrades within NDB 7.2.

  • DictTrace: Enable DBDICT debugging; for NDB development

  • DiskIOThreadPool: Number of unbound threads for file access (currently only for Disk Data); known as IOThreadPool before MySQL Cluster NDB 6.4.3.

  • Diskless: Run without using the disk

  • DiskPageBufferEntries: Number of 32K page entries to allocate in DiskPageBufferMemory. Very large disk transactions may require increasing this value.

  • DiskPageBufferMemory: Number of bytes on each data node allocated for the disk page buffer cache

  • DiskSyncSize: Amount of data written to file before a synch is forced

  • EnablePartialLcp: Enable partial LCP (true); if this is disabled (false), all LCPs write full checkpoints.

  • EnableRedoControl: Enable adaptive checkpointing speed for controlling redo log usage

  • EventLogBufferSize: Size of circular buffer for NDB log events within data nodes.

  • ExecuteOnComputer: String referencing an earlier defined COMPUTER

  • ExtraSendBufferMemory: Memory to use for send buffers in addition to any allocated by TotalSendBufferMemory or SendBufferMemory. Default (0) allows up to 16MB.

  • FileSystemPath: Path to directory where the data node stores its data (directory must exist)

  • FileSystemPathDataFiles: Path to directory where the data node stores its Disk Data files. The default value is FilesystemPathDD, if set; otherwise, FilesystemPath is used if it is set; otherwise, the value of DataDir is used.

  • FileSystemPathDD: Path to directory where the data node stores its Disk Data and undo files. The default value is FileSystemPath, if set; otherwise, the value of DataDir is used.

  • FileSystemPathUndoFiles: Path to directory where the data node stores its undo files for Disk Data. The default value is FilesystemPathDD, if set; otherwise, FilesystemPath is used if it is set; otherwise, the value of DataDir is used.

  • FragmentLogFileSize: Size of each redo log file

  • HeartbeatIntervalDbApi: Time between API node-data node heartbeats. (API connection closed after 3 missed heartbeats)

  • HeartbeatIntervalDbDb: Time between data node-to-data node heartbeats; data node considered dead after 3 missed heartbeats

  • HeartbeatOrder: Sets the order in which data nodes check each other's heartbeats for determining whether a given node is still active and connected to the cluster. Must be zero for all data nodes or distinct nonzero values for all data nodes; see documentation for further guidance.

  • HostName: Host name or IP address for this data node.

  • IndexMemory: Number of bytes on each data node allocated for storing indexes; subject to available system RAM and size of DataMemory. Deprecated in NDB 7.6.2 and later.

  • IndexStatAutoCreate: Enable/disable automatic statistics collection when indexes are created.

  • IndexStatAutoUpdate: Monitor indexes for changes and trigger automatic statistics updates

  • IndexStatSaveScale: Scaling factor used in determining size of stored index statistics.

  • IndexStatSaveSize: Maximum size in bytes for saved statistics per index.

  • IndexStatTriggerPct: Threshold percent change in DML operations for index statistics updates. The value is scaled down by IndexStatTriggerScale.

  • IndexStatTriggerScale: Scale down IndexStatTriggerPct by this amount, multiplied by the base 2 logarithm of the index size, for a large index. Set to 0 to disable scaling.

  • IndexStatUpdateDelay: Minimum delay between automatic index statistics updates for a given index. 0 means no delay.

  • InitFragmentLogFiles: Initialize fragment logfiles (sparse/full)

  • InitialLogFileGroup: Describes a log file group that is created during an initial start. See documentation for format.

  • InitialNoOfOpenFiles: Initial number of files open per data node. (One thread is created per file)

  • InitialTablespace: Describes a tablespace that is created during an initial start. See documentation for format.

  • InsertRecoveryWork: Percentage of RecoveryWork used for inserted rows; has no effect unless partial local checkpoints are in use

  • LateAlloc: Allocate memory after the connection to the management server has been established.

  • LcpScanProgressTimeout: Maximum time that local checkpoint fragment scan can be stalled before node is shut down to ensure systemwide LCP progress. Use 0 to disable.

  • LockExecuteThreadToCPU: A comma-delimited list of CPU IDs

  • LockMaintThreadsToCPU: CPU ID indicating which CPU runs the maintenance threads

  • LockPagesInMainMemory: Previously: If set to true/1, then NDB Cluster data is not swapped out to disk. In MySQL 5.0.36/5.1.15 and later: 0=disable locking, 1=lock after memory allocation, 2=lock before memory allocation

  • LogLevelCheckpoint: Log level of local and global checkpoint information printed to stdout

  • LogLevelCongestion: Level of congestion information printed to stdout

  • LogLevelConnection: Level of node connect/disconnect information printed to stdout

  • LogLevelError: Transporter, heartbeat errors printed to stdout

  • LogLevelInfo: Heartbeat and log information printed to stdout

  • LogLevelNodeRestart: Level of node restart and node failure information printed to stdout

  • LogLevelShutdown: Level of node shutdown information printed to stdout

  • LogLevelStartup: Level of node startup information printed to stdout

  • LogLevelStatistic: Level of transaction, operation, and transporter information printed to stdout

  • LongMessageBuffer: Number of bytes allocated on each data node for internal long messages

  • MaxAllocate: Maximum size of allocation to use when allocating memory for tables

  • MaxBufferedEpochs: Allowed number of epochs that a subscribing node can lag behind (unprocessed epochs). Exceeding this limit causes lagging subscribers to be disconnected.

  • MaxBufferedEpochBytes: Total number of bytes allocated for buffering epochs.

  • MaxDiskWriteSpeed: Maximum number of bytes per second that can be written by LCP and backup when no restarts are ongoing.

  • MaxDiskWriteSpeedOtherNodeRestart: Maximum number of bytes per second that can be written by LCP and backup when another node is restarting.

  • MaxDiskWriteSpeedOwnRestart: Maximum number of bytes per second that can be written by LCP and backup when this node is restarting.

  • MaxFKBuildBatchSize: Maximum scan batch size to use for building foreign keys. Increasing this value may speed up builds of foreign keys but impacts ongoing traffic as well.

  • MaxDMLOperationsPerTransaction: Limit size of a transaction; aborts the transaction if it requires more than this many DML operations. Set to 0 to disable.

  • MaxLCPStartDelay: Time in seconds that LCP polls for checkpoint mutex (to allow other data nodes to complete metadata synchronization), before putting itself in lock queue for parallel recovery of table data.

  • MaxNoOfAttributes: Suggests a total number of attributes stored in database (sum over all tables)

  • MaxNoOfConcurrentIndexOperations: Total number of index operations that can execute simultaneously on one data node

  • MaxNoOfConcurrentOperations: Maximum number of operation records in transaction coordinator

  • MaxNoOfConcurrentScans: Maximum number of scans executing concurrently on the data node

  • MaxNoOfConcurrentSubOperations: Maximum number of concurrent subscriber operations

  • MaxNoOfConcurrentTransactions: Maximum number of transactions executing concurrently on this data node, the total number of transactions that can be executed concurrently is this value times the number of data nodes in the cluster.

  • MaxNoOfFiredTriggers: Total number of triggers that can fire simultaneously on one data node

  • MaxNoOfLocalOperations: Maximum number of operation records defined on this data node

  • MaxNoOfLocalScans: Maximum number of fragment scans in parallel on this data node

  • MaxNoOfOpenFiles: Maximum number of files open per data node. (One thread is created per file)

  • MaxNoOfOrderedIndexes: Total number of ordered indexes that can be defined in the system

  • MaxNoOfSavedMessages: Maximum number of error messages to write in error log and maximum number of trace files to retain

  • MaxNoOfSubscribers: Maximum number of subscribers (default 0 = MaxNoOfTables * 2)

  • MaxNoOfSubscriptions: Maximum number of subscriptions (default 0 = MaxNoOfTables)

  • MaxNoOfTables: Suggests a total number of NDB tables stored in the database

  • MaxNoOfTriggers: Total number of triggers that can be defined in the system

  • MaxNoOfUniqueHashIndexes: Total number of unique hash indexes that can be defined in the system

  • MaxParallelCopyInstances: Number of parallel copies during node restarts. Default is 0, which uses number of LDMs on both nodes, to a maximum of 16.

  • MaxParallelScansPerFragment: Maximum number of parallel scans per fragment. Once this limit is reached, scans are serialized.

  • MaxReorgBuildBatchSize: Maximum scan batch size to use for reorganization of table partitions. Increasing this value may speed up table partition reorganization but impacts ongoing traffic as well.

  • MaxStartFailRetries: Maximum retries when data node fails on startup, requires StopOnError = 0. Setting to 0 causes start attempts to continue indefinitely.

  • MaxUIBuildBatchSize: Maximum scan batch size to use for building unique keys. Increasing this value may speed up builds of unique keys but impacts ongoing traffic as well.

  • MemReportFrequency: Frequency of memory reports in seconds; 0 = report only when exceeding percentage limits

  • MinDiskWriteSpeed: Minimum number of bytes per second that can be written by LCP and backup.

  • MinFreePct: The percentage of memory resources to keep in reserve for restarts.

  • NodeGroup: Node group to which the data node belongs; used only during initial start of cluster.

  • NodeId: Number uniquely identifying the data node among all nodes in the cluster.

  • NoOfFragmentLogFiles: Number of 16 MB redo log files in each of 4 file sets belonging to the data node

  • NoOfReplicas: Number of copies of all data in database; recommended value is 2 (default). Values greater than 2 are not supported in production.

  • Numa: (Linux only; requires libnuma) Controls NUMA support. Setting to 0 permits system to determine use of interleaving by data node process; 1 means that it is determined by data node.

  • ODirect: Use O_DIRECT file reads and writes when possible.

  • ODirectSyncFlag: O_DIRECT writes are treated as synchronized writes; ignored when ODirect is not enabled, InitFragmentLogFiles is set to SPARSE, or both.

  • RealtimeScheduler: When true, data node threads are scheduled as real-time threads. Default is false.

  • RecoveryWork: Percentage of storage overhead for LCP files: greater value means less work in normal operations, more work during recovery

  • RedoBuffer: Number bytes on each data node allocated for writing redo logs

  • RedoOverCommitCounter: When RedoOverCommitLimit has been exceeded this many times, transactions are aborted, and operations are handled as specified by DefaultOperationRedoProblemAction.

  • RedoOverCommitLimit: Each time that flushing the current redo buffer takes longer than this many seconds, the number of times that this has happened is compared to RedoOverCommitCounter.

  • RestartOnErrorInsert: Control the type of restart caused by inserting an error (when StopOnError is enabled)

  • SchedulerExecutionTimer: Number of microseconds to execute in scheduler before sending

    schedulerexecutiontimer:发送之前要在计划程序中执行的微秒数

  • SchedulerResponsiveness: Set NDB scheduler response optimization 0-10; higher values provide better response time but lower throughput

    schedulerresponsibility:设置ndb调度器响应优化0-10;值越大,响应时间越好,但吞吐量越低

  • SchedulerSpinTimer: Number of microseconds to execute in scheduler before sleeping

    SchedulerSpinTimer:在睡眠前要在计划程序中执行的微秒数

  • ServerPort: Port used to set up transporter for incoming connections from API nodes

    serverport:用于为来自api节点的传入连接设置传输程序的端口

  • SharedGlobalMemory: Total number of bytes on each data node allocated for any use

    sharedglobalmemory:为任何用途分配的每个数据节点上的字节总数

  • StartFailRetryDelay: Delay in seconds after start failure prior to retry; requires StopOnError = 0.

    StartFailRetryDelay:在重试之前,启动失败后的延迟(秒);需要StopOneRor=0。

  • StartFailureTimeout: Milliseconds to wait before terminating. (0=Wait forever)

    StartFailureTimeout:终止前等待的毫秒数。(0=永远等待)

  • StartNoNodeGroupTimeout: Time to wait for nodes without a nodegroup before trying to start (0=forever)

    startnonodegrouptimeout:在尝试启动之前等待没有节点组的节点的时间(0=永远)

  • StartPartialTimeout: Milliseconds to wait before trying to start without all nodes. (0=Wait forever)

    StartPartialTimeOut:尝试在没有所有节点的情况下启动之前等待的毫秒数。(0=永远等待)

  • StartPartitionedTimeout: Milliseconds to wait before trying to start partitioned. (0=Wait forever)

    StartPartitionedTimeout:尝试启动分区之前等待的毫秒数。(0=永远等待)

  • StartupStatusReportFrequency: Frequency of status reports during startup

    StartupStatusReportFrequency:启动期间状态报告的频率

  • StopOnError: When set to 0, the data node automatically restarts and recovers following node failures

    stoponerror:当设置为0时,数据节点会自动重新启动并恢复以下节点故障

  • StringMemory: Default size of string memory (0 to 100 = % of maximum, 101+ = actual bytes)

    StringMemory:字符串内存的默认大小(0到100=最大值,101 +=实际字节)

  • TcpBind_INADDR_ANY: Bind IP_ADDR_ANY so that connections can be made from anywhere (for autogenerated connections)

    tcpbind_inaddr_any:绑定ip_addr_any以便可以从任何地方建立连接(对于自动生成的连接)

  • TimeBetweenEpochs: Time between epochs (synchronization used for replication)

    time between epochs:时间间隔(用于复制的同步)

  • TimeBetweenEpochsTimeout: Timeout for time between epochs. Exceeding will cause node shutdown.

    timebetweeepochstimult:两个时间段之间的超时。超过将导致节点关闭。

  • TimeBetweenGlobalCheckpoints: Time between doing group commit of transactions to disk

    Timebetweeenglobalcheckpoints:将事务组提交到磁盘之间的时间

  • TimeBetweenGlobalCheckpointsTimeout: Minimum timeout for group commit of transactions to disk

    TimeBetweeengLobalcheckPointStimeout:将事务组提交到磁盘的最小超时

  • TimeBetweenInactiveTransactionAbortCheck: Time between checks for inactive transactions

    timebetweeninactivetransactionabortcheck:检查非活动事务的间隔时间

  • TimeBetweenLocalCheckpoints: Time between taking snapshots of the database (expressed in base-2 logarithm of bytes)

    TimebetweenLocalCheckpoints:拍摄数据库快照之间的时间(以字节的对数为基数表示)

  • TimeBetweenWatchDogCheck: Time between execution checks inside a data node

    timebetweenwatchdogcheck:数据节点内执行检查之间的时间

  • TimeBetweenWatchDogCheckInitial: Time between execution checks inside a data node (early start phases when memory is allocated)

    timebetweenwatchdogcheckinitial:数据节点内执行检查之间的时间(分配内存时的早期启动阶段)

  • TotalSendBufferMemory: Total memory to use for all transporter send buffers.

    TotalSendBufferMemory:用于所有传输程序发送缓冲区的总内存。

  • TransactionBufferMemory: Dynamic buffer space (in bytes) for key and attribute data allocated for each data node

    transactionbuffermemory:为每个数据节点分配的键和属性数据的动态缓冲区空间(字节)

  • TransactionDeadlockDetectionTimeout: Time transaction can spend executing within a data node. This is the time that the transaction coordinator waits for each data node participating in the transaction to execute a request. If the data node takes more than this amount of time, the transaction is aborted.

    TransactionDeadLockDetectionTimeout:事务在数据节点内执行所花费的时间。这是事务协调器等待参与事务的每个数据节点执行请求的时间。如果数据节点花费的时间超过此值,则事务将中止。

  • TransactionInactiveTimeout: Milliseconds that the application waits before executing another part of the transaction. This is the time the transaction coordinator waits for the application to execute or send another part (query, statement) of the transaction. If the application takes too much time, then the transaction is aborted. Timeout = 0 means that the application never times out.

    TransactioninActiveTimeout:应用程序在执行事务的另一部分之前等待的毫秒数。这是事务协调器等待应用程序执行或发送事务的另一部分(查询、语句)的时间。如果应用程序花费太多时间,则事务将中止。timeout=0表示应用程序从不超时。

  • TwoPassInitialNodeRestartCopy: Copy data in 2 passes during initial node restart, which enables multithreaded building of ordered indexes for such restarts.

    twopassinitialnoderestartcopy:在初始节点重新启动期间,将数据复制到2个过程中,这允许为此类重新启动创建多线程有序索引。

  • UndoDataBuffer: Number of bytes on each data node allocated for writing data undo logs

    undodatabuffer:每个数据节点上为写入数据撤消日志而分配的字节数

  • UndoIndexBuffer: Number of bytes on each data node allocated for writing index undo logs

    UndoIndexBuffer:每个数据节点上为写入索引撤消日志而分配的字节数

  • UseShm: Use shared memory connections between nodes

    useshm:在节点之间使用共享内存连接

The following parameters are specific to ndbmtd:

以下参数特定于ndbmtd:

  • MaxNoOfExecutionThreads: For ndbmtd only, specify maximum number of execution threads

    对于NdBMTD,只指定执行线程的最大数目

  • NoOfFragmentLogParts: Number of redo log file groups belonging to this data node; value must be an even multiple of 4.

    noofframgentlogparts:属于此数据节点的重做日志文件组数;值必须是4的偶数倍。

  • ThreadConfig: Used for configuration of multithreaded data nodes (ndbmtd). Default is an empty string; see documentation for syntax and other information.

    threadconfig:用于多线程数据节点(ndbmtd)的配置。默认为空字符串;有关语法和其他信息,请参阅文档。
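As an illustrative sketch only (all values and thread counts here are hypothetical, not recommendations), a few of the data node and ndbmtd parameters above might be set in the [ndbd default] section of config.ini like this:

```ini
# config.ini -- hypothetical excerpt; values chosen for illustration only
[ndbd default]
NoOfReplicas=2                # recommended (and default) number of data copies
ODirect=1                     # use O_DIRECT file reads and writes when possible
StopOnError=0                 # restart and recover automatically after failures
RedoBuffer=32M                # per-data-node buffer for writing redo logs
MaxNoOfExecutionThreads=8     # ndbmtd only: maximum number of execution threads
# Alternatively, an explicit thread layout can replace MaxNoOfExecutionThreads:
# ThreadConfig=ldm={count=4},tc={count=2},send={count=1},recv={count=1}
```

MaxNoOfExecutionThreads and ThreadConfig should not both be set; the commented-out ThreadConfig line shows the alternative form described above.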

21.3.2.2 NDB Cluster Management Node Configuration Parameters

The listing in this section provides information about parameters used in the [ndb_mgmd] or [mgm] section of a config.ini file for configuring NDB Cluster management nodes. For detailed descriptions and other additional information about each of these parameters, see Section 21.3.3.5, “Defining an NDB Cluster Management Server”.

  • ArbitrationDelay: When asked to arbitrate, the arbitrator waits this long before voting (milliseconds).

  • ArbitrationRank: If 0, then the management node is not an arbitrator. The kernel selects arbitrators in order 1, 2.

  • DataDir: Data directory for this node.

  • ExecuteOnComputer: String referencing an earlier defined COMPUTER.

  • HeartbeatIntervalMgmdMgmd: Time between management-node-to-management-node heartbeats; the connection between management nodes is considered lost after 3 missed heartbeats.

  • HeartbeatThreadPriority: Set heartbeat thread policy and priority for management nodes; see the manual for allowed values.

  • HostName: Host name or IP address for this management node.

  • Id: Number identifying the management node (Id). Now deprecated; use NodeId instead.

  • LogDestination: Where to send log messages: console, system log, or specified log file.

  • NodeId: Number uniquely identifying the management node among all nodes in the cluster.

  • PortNumber: Port number used to send commands to and fetch configuration from the management server.

  • PortNumberStats: Port number used to get statistical information from a management server.

  • TotalSendBufferMemory: Total memory to use for all transporter send buffers.

  • wan: Use WAN TCP settings as default.
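A hypothetical sketch of an [ndb_mgmd] section using several of these parameters (the host name and data directory are placeholders, not defaults):

```ini
# config.ini -- hypothetical excerpt; host and path are placeholders
[ndb_mgmd]
NodeId=1                        # unique node ID within the cluster
HostName=mgm.example.com        # host name or IP of this management node
DataDir=/var/lib/mysql-cluster  # directory for this node's files
ArbitrationRank=1               # selected as arbitrator before rank-2 nodes
LogDestination=FILE:filename=ndb_1_cluster.log,maxsize=1000000,maxfiles=6
```

The LogDestination value shows the FILE: form; CONSOLE and SYSLOG destinations are also possible, as noted in the parameter description above.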

Note

After making changes in a management node's configuration, it is necessary to perform a rolling restart of the cluster for the new configuration to take effect. See Section 21.3.3.5, “Defining an NDB Cluster Management Server”, for more information.

To add new management servers to a running NDB Cluster, it is also necessary to perform a rolling restart of all cluster nodes after modifying any existing config.ini files. For more information about issues arising when using multiple management nodes, see Section 21.1.7.10, “Limitations Relating to Multiple NDB Cluster Nodes”.

21.3.2.3 NDB Cluster SQL Node and API Node Configuration Parameters

The listing in this section provides information about parameters used in the [mysqld] and [api] sections of a config.ini file for configuring NDB Cluster SQL nodes and API nodes. For detailed descriptions and other additional information about each of these parameters, see Section 21.3.3.7, “Defining SQL and Other API Nodes in an NDB Cluster”.

  • ApiVerbose: Enable NDB API debugging; for NDB development.

  • ArbitrationDelay: When asked to arbitrate, the arbitrator waits this many milliseconds before voting.

  • ArbitrationRank: If 0, then the API node is not an arbitrator. The kernel selects arbitrators in order 1, 2.

  • AutoReconnect: Specifies whether an API node should reconnect fully when disconnected from the cluster.

  • BatchByteSize: The default batch size in bytes.

  • BatchSize: The default batch size in number of records.

  • ConnectBackoffMaxTime: Specifies the longest time in milliseconds (~100ms resolution) to allow between connection attempts to any given data node by this API node. Excludes time elapsed while connection attempts are ongoing, which in the worst case can take several seconds. Disable by setting to 0. If no data nodes are currently connected to this API node, StartConnectBackoffMaxTime is used instead.

  • ConnectionMap: Specifies which data nodes to connect to.

  • DefaultHashMapSize: Set size (in buckets) to use for table hash maps. Three values are supported: 0, 240, and 3840. Intended primarily for upgrades and downgrades within NDB 7.2.

  • DefaultOperationRedoProblemAction: How operations are handled in the event that RedoOverCommitCounter is exceeded.

  • ExecuteOnComputer: String referencing an earlier defined COMPUTER.

  • ExtraSendBufferMemory: Memory to use for send buffers in addition to any allocated by TotalSendBufferMemory or SendBufferMemory. Default (0) allows up to 16MB.

  • HeartbeatThreadPriority: Set heartbeat thread policy and priority for API nodes; see the manual for allowed values.

  • HostName: Host name or IP address for this SQL or API node.

  • Id: Number identifying the MySQL server or API node (Id). Now deprecated; use NodeId instead.

  • MaxScanBatchSize: The maximum collective batch size for one scan.

  • NodeId: Number uniquely identifying the SQL node or API node among all nodes in the cluster.

  • StartConnectBackoffMaxTime: Same as ConnectBackoffMaxTime except that this parameter is used in its place if no data nodes are connected to this API node.

  • TotalSendBufferMemory: Total memory to use for all transporter send buffers.

  • wan: Use WAN TCP settings as default.

For a discussion of MySQL server options for NDB Cluster, see Section 21.3.3.9.1, “MySQL Server Options for NDB Cluster”. For information about MySQL server system variables relating to NDB Cluster, see Section 21.3.3.9.2, “NDB Cluster System Variables”.

Note

To add new SQL or API nodes to the configuration of a running NDB Cluster, it is necessary to perform a rolling restart of all cluster nodes after adding new [mysqld] or [api] sections to the config.ini file (or files, if you are using more than one management server). This must be done before the new SQL or API nodes can connect to the cluster.

It is not necessary to perform any restart of the cluster if new SQL or API nodes can employ previously unused API slots in the cluster configuration to connect to the cluster.
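As a hypothetical sketch (the host name and node IDs are placeholders), the [mysqld] and [api] sections mentioned in the note above might look like this, including an unbound slot that any API client can use without a further rolling restart:

```ini
# config.ini -- hypothetical excerpt
[mysqld]
NodeId=4
HostName=sql1.example.com   # SQL node bound to a specific host
ArbitrationRank=2           # eligible as arbitrator after rank-1 nodes

[api]
NodeId=5                    # no HostName: a free slot usable from any host
```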

21.3.2.4 Other NDB Cluster Configuration Parameters

The listings in this section provide information about parameters used in the [computer], [tcp], and [shm] sections of a config.ini file for configuring NDB Cluster. For detailed descriptions and additional information about individual parameters, see Section 21.3.3.10, “NDB Cluster TCP/IP Connections”, or Section 21.3.3.12, “NDB Cluster Shared Memory Connections”, as appropriate.

The following parameters apply to the config.ini file's [computer] section:

  • HostName: Host name or IP address of this computer.

  • Id: A unique identifier for this computer.

The following parameters apply to the config.ini file's [tcp] section:

  • Checksum: If checksum is enabled, all signals between nodes are checked for errors.

  • Group: Used for group proximity; a smaller value is interpreted as being closer.

  • NodeId1: ID of the node (data node, API node, or management node) on one side of the connection.

  • NodeId2: ID of the node (data node, API node, or management node) on the other side of the connection.

  • NodeIdServer: Set the server side of the TCP connection.

  • OverloadLimit: When more than this many unsent bytes are in the send buffer, the connection is considered overloaded.

  • PortNumber: Port used for this transporter (DEPRECATED).

  • PreSendChecksum: If this parameter and Checksum are both enabled, perform pre-send checksum checks, and check all TCP signals between nodes for errors.

  • Proxy:

  • ReceiveBufferMemory: Bytes of buffer for signals received by this node.

  • SendBufferMemory: Bytes of TCP buffer for signals sent from this node.

  • SendSignalId: Sends the ID in each signal. Used in trace files. Defaults to true in debug builds.

  • TCP_MAXSEG_SIZE: Value used for TCP_MAXSEG.

  • TCP_RCV_BUF_SIZE: Value used for SO_RCVBUF.

  • TCP_SND_BUF_SIZE: Value used for SO_SNDBUF.

  • TcpBind_INADDR_ANY: Bind InAddrAny instead of the host name for the server part of the connection.

The following parameters apply to the config.ini file's [shm] section:

  • Checksum: If checksum is enabled, all signals between nodes are checked for errors.

  • Group:

  • NodeId1: ID of the node (data node, API node, or management node) on one side of the connection.

  • NodeId2: ID of the node (data node, API node, or management node) on the other side of the connection.

  • NodeIdServer: Set the server side of the SHM connection.

  • OverloadLimit: When more than this many unsent bytes are in the send buffer, the connection is considered overloaded.

  • PortNumber: Port used for this transporter (DEPRECATED).

  • PreSendChecksum: If this parameter and Checksum are both enabled, perform pre-send checksum checks, and check all SHM signals between nodes for errors.

  • SendBufferMemory: Bytes in the shared memory buffer for signals sent from this node.

  • SendSignalId: Sends the ID in each signal. Used in trace files.

  • ShmKey: A shared memory key; when set to 1, this is calculated by NDB.

  • ShmSpinTime: When receiving, number of microseconds to spin before sleeping.

  • ShmSize: Size of the shared memory segment.

  • Signum: Signal number to be used for signalling.
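A hypothetical sketch of explicit [tcp] and [shm] sections (node IDs and sizes are placeholders; such sections are normally only needed when overriding the automatically generated transporter setup):

```ini
# config.ini -- hypothetical excerpt; node IDs and sizes are placeholders
[tcp]
NodeId1=2               # data node on one side of the connection
NodeId2=4               # SQL node on the other side
SendBufferMemory=4M     # TCP buffer for signals sent from this node

[shm]
NodeId1=2               # both nodes must be on the same host for SHM
NodeId2=5
ShmSize=8M              # size of the shared memory segment
ShmSpinTime=200         # microseconds to spin before sleeping when receiving
```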

21.3.2.5 NDB Cluster mysqld Option and Variable Reference

The following table provides a list of the command-line options, server and status variables applicable within mysqld when it is running as an SQL node in an NDB Cluster. For a table showing all command-line options, server and status variables available for use with mysqld, see Section 5.1.3, “Server Option, System Variable, and Status Variable Reference”.

  • Com_show_ndb_status: Count of SHOW NDB STATUS statements.

  • Handler_discover: Number of times that tables have been discovered.

  • ndb-batch-size: Size (in bytes) to use for NDB transaction batches.

  • ndb-blob-read-batch-bytes: Specifies the size in bytes that large BLOB reads should be batched into. 0 = no limit.

  • ndb-blob-write-batch-bytes: Specifies the size in bytes that large BLOB writes should be batched into. 0 = no limit.

  • ndb-cluster-connection-pool: Number of connections to the cluster used by MySQL.

  • ndb-cluster-connection-pool-nodeids: Comma-separated list of node IDs for connections to the cluster used by MySQL; the number of nodes in the list must be the same as the value set for --ndb-cluster-connection-pool.

  • ndb-connectstring: Points to the management server that distributes the cluster configuration.

  • ndb-default-column-format: Use this value (FIXED or DYNAMIC) by default for COLUMN_FORMAT and ROW_FORMAT options when creating or adding columns to a table.

  • ndb-deferred-constraints: Specifies that constraint checks on unique indexes (where these are supported) should be deferred until commit time. Not normally needed or used; for testing purposes only.

  • ndb-distribution: Default distribution for new tables in NDBCLUSTER (KEYHASH or LINHASH, default is KEYHASH).

  • ndb-log-apply-status: Cause a MySQL server acting as a slave to log mysql.ndb_apply_status updates received from its immediate master in its own binary log, using its own server ID. Effective only if the server is started with the --ndbcluster option.

  • ndb-log-empty-epochs: When enabled, causes epochs in which there were no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.

  • ndb-log-empty-update: When enabled, causes updates that produced no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.

  • ndb-log-exclusive-reads: Log primary key reads with exclusive locks; allow conflict resolution based on read conflicts.

  • ndb-log-orig: Log the originating server ID and epoch in the mysql.ndb_binlog_index table.

  • ndb-log-transaction-id: Write NDB transaction IDs in the binary log. Requires --log-bin-v1-events=OFF.

  • ndb-log-update-as-write: Toggles logging of updates on the master between updates (OFF) and writes (ON).

  • ndb-mgmd-host: Set the host (and port, if desired) for connecting to the management server.

  • ndb-nodeid: NDB Cluster node ID for this MySQL server.

  • ndb-recv-thread-activation-threshold: Activation threshold when the receive thread takes over the polling of the cluster connection (measured in concurrently active threads).

  • ndb-recv-thread-cpu-mask: CPU mask for locking receiver threads to specific CPUs; specified as hexadecimal. See documentation for details.

  • ndb-transid-mysql-connection-map: Enable or disable the ndb_transid_mysql_connection_map plugin; that is, enable or disable the INFORMATION_SCHEMA table having that name.

  • ndb-wait-connected: Time (in seconds) for the MySQL server to wait for connection to cluster management and data nodes before accepting MySQL client connections.

  • ndb-wait-setup: Time (in seconds) for the MySQL server to wait for NDB engine setup to complete.

  • ndb-allow-copying-alter-table: Set to OFF to keep ALTER TABLE from using copying operations on NDB tables.
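A hypothetical my.cnf fragment for a mysqld acting as an SQL node, using a few of the options listed above (the host name and node IDs are placeholders):

```ini
# my.cnf -- hypothetical excerpt for an NDB Cluster SQL node
[mysqld]
ndbcluster                                # enable the NDBCLUSTER storage engine
ndb-connectstring=mgm.example.com         # management server supplying the config
ndb-cluster-connection-pool=2             # two connections to the cluster
ndb-cluster-connection-pool-nodeids=4,5   # list length must match the pool size
ndb-batch-size=32768                      # NDB transaction batch size in bytes
ndb-wait-connected=30                     # wait up to 30s for the cluster first
```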

  • Ndb_api_bytes_received_count: Amount of data (in bytes) received from the data nodes by this MySQL Server (SQL node)

    ndb_api_bytes_received_count:此MySQL服务器(SQL节点)从数据节点接收的数据量(字节)

  • Ndb_api_bytes_received_count_session: Amount of data (in bytes) received from the data nodes in this client session

    ndb_api_bytes_received_count_session:在此客户端会话中从数据节点接收的数据量(字节)

  • Ndb_api_bytes_received_count_slave: Amount of data (in bytes) received from the data nodes by this slave

    ndb_api_bytes_received_count_slave:此从节点从数据节点接收的数据量(字节)

  • Ndb_api_bytes_sent_count: Amount of data (in bytes) sent to the data nodes by this MySQL Server (SQL node)

    ndb_api_bytes_sent_count:此MySQL服务器(SQL节点)发送到数据节点的数据量(字节)

  • Ndb_api_bytes_sent_count_session: Amount of data (in bytes) sent to the data nodes in this client session

    ndb_api_bytes_sent_count_session:发送到此客户端会话中的数据节点的数据量(字节)

  • Ndb_api_bytes_sent_count_slave: Amount of data (in bytes) sent to the data nodes by this slave

    ndb_api_bytes_sent_count_slave:此slave发送到数据节点的数据量(字节)

  • Ndb_api_event_bytes_count: Number of bytes of events received by this MySQL Server (SQL node)

    ndb_api_event_bytes_count:此MySQL服务器(SQL节点)接收的事件字节数

  • Ndb_api_event_bytes_count_injector: Number of bytes of events received by the NDB binary log injector thread

    ndb_api_event_bytes_count_injector:ndb二进制日志注入器线程接收的事件字节数

  • Ndb_api_event_data_count: Number of row change events received by this MySQL Server (SQL node)

    ndb_api_event_data_count:此MySQL服务器(SQL节点)接收的行更改事件数

  • Ndb_api_event_data_count_injector: Number of row change events received by the NDB binary log injector thread

    ndb_api_event_data_count_injector:ndb二进制日志注入器线程接收的行更改事件数

  • Ndb_api_event_nondata_count: Number of events received, other than row change events, by this MySQL Server (SQL node)

    ndb_api_event_nondata_count:此MySQL服务器(SQL节点)接收的事件数(行更改事件除外)

  • Ndb_api_event_nondata_count_injector: Number of events received, other than row change events, by the NDB binary log injector thread

    ndb_api_event_nondata_count_injector:ndb二进制日志注入器线程接收的事件数(行更改事件除外)

  • Ndb_api_pk_op_count: Number of operations based on or using primary keys by this MySQL Server (SQL node)

    ndb_api_pk_op_count:基于或使用此mysql服务器主键的操作数(sql节点)

  • Ndb_api_pk_op_count_session: Number of operations based on or using primary keys in this client session

    ndb_api_pk_op_count_会话:基于或使用此客户端会话中的主键的操作数

  • Ndb_api_pk_op_count_slave: Number of operations based on or using primary keys by this slave

    ndb_api_pk_op_count_slave:基于主键或使用主键的操作数

  • Ndb_api_pruned_scan_count: Number of scans that have been pruned to a single partition by this MySQL Server (SQL node)

    ndb_api_pruned_scan_count:此MySQL服务器(SQL节点)已修剪到单个分区的扫描数

  • Ndb_api_pruned_scan_count_session: Number of scans that have been pruned to a single partition in this client session

    ndb_api_pruned_scan_count_session:此客户端会话中已修剪到单个分区的扫描数

  • Ndb_api_pruned_scan_count_slave: Number of scans that have been pruned to a single partition by this slave

    ndb_api_pruned_scan_count_slave:此slave已修剪到单个分区的扫描数

  • Ndb_api_range_scan_count: Number of range scans that have been started by this MySQL Server (SQL node)

    ndb_api_range_scan_count:此MySQL服务器(SQL节点)已启动的范围扫描数

  • Ndb_api_range_scan_count_session: Number of range scans that have been started in this client session

    ndb_api_range_scan_count_会话:在此客户端会话中启动的范围扫描数

  • Ndb_api_range_scan_count_slave: Number of range scans that have been started by this slave

    ndb_api_range_scan_count_slave:此slave已启动的范围扫描数

  • Ndb_api_read_row_count: Total number of rows that have been read by this MySQL Server (SQL node)

    ndb_api_read_row_count:此MySQL服务器(SQL节点)已读取的行总数

  • Ndb_api_read_row_count_session: Total number of rows that have been read in this client session

    ndb_api_read_row_count_会话:已在此客户端会话中读取的行总数

  • Ndb_api_read_row_count_slave: Total number of rows that have been read by this slave

    ndb_api_read_row_count_slave:此slave已读取的行总数

  • Ndb_api_scan_batch_count: Number of batches of rows received by this MySQL Server (SQL node)

    ndb_api_scan_batch_count:此MySQL服务器(SQL节点)接收的行批数

  • Ndb_api_scan_batch_count_session: Number of batches of rows received in this client session

    ndb_api_scan_batch_count_session:在此客户端会话中接收的行批数

  • Ndb_api_scan_batch_count_slave: Number of batches of rows received by this slave

    ndb_api_scan_batch_count_slave:此slave接收的行的批数

  • Ndb_api_table_scan_count: Number of table scans that have been started, including scans of internal tables, by this MySQL Server (SQL node)

    ndb_api_table_scan_count:此MySQL服务器(SQL节点)已启动的表扫描数,包括内部表的扫描数

  • Ndb_api_table_scan_count_session: Number of table scans that have been started, including scans of internal tables, in this client session

    ndb_api_table_scan_count_会话:在此客户端会话中已启动的表扫描数,包括内部表的扫描数

  • Ndb_api_table_scan_count_slave: Number of table scans that have been started, including scans of internal tables, by this slave

    ndb_api_table_scan_count_slave:此从机已启动的表扫描数,包括内部表的扫描数

  • Ndb_api_trans_abort_count: Number of transactions aborted by this MySQL Server (SQL node)

    ndb_api_trans_abort_count:此MySQL服务器(SQL节点)中止的事务数

  • Ndb_api_trans_abort_count_session: Number of transactions aborted in this client session

    ndb_api_trans_abort_count_会话:在此客户端会话中中止的事务数

  • Ndb_api_trans_abort_count_slave: Number of transactions aborted by this slave

    ndb_api_trans_abort_count_slave:此slave中止的事务数

  • Ndb_api_trans_close_count: Number of transactions aborted (may be greater than the sum of TransCommitCount and TransAbortCount) by this MySQL Server (SQL node)

    ndb_api_trans_close_count:此mysql服务器(sql节点)中止的事务数(可能大于transcommitcount和transbortcount的总和)

  • Ndb_api_trans_close_count_session: Number of transactions aborted (may be greater than the sum of TransCommitCount and TransAbortCount) in this client session

    ndb_api_trans_close_count_会话:此客户端会话中中止的事务数(可能大于transcommitcount和transbortcount的总和)

  • Ndb_api_trans_close_count_slave: Number of transactions aborted (may be greater than the sum of TransCommitCount and TransAbortCount) by this slave

    ndb_api_trans_close_count_slave:此slave中止的事务数(可能大于transcommitcount和transbortcount的总和)

  • Ndb_api_trans_commit_count: Number of transactions committed by this MySQL Server (SQL node)

    ndb_api_trans_commit_count:此MySQL服务器(SQL节点)提交的事务数

  • Ndb_api_trans_commit_count_session: Number of transactions committed in this client session

    ndb_api_trans_commit_count_会话:在此客户端会话中提交的事务数

  • Ndb_api_trans_commit_count_slave: Number of transactions committed by this slave

    ndb_api_trans_commit_count_slave:此slave提交的事务数

  • Ndb_api_trans_local_read_row_count: Total number of rows that have been read by this MySQL Server (SQL node)

    ndb_api_trans_local_read_row_count:此MySQL服务器(SQL节点)已读取的行总数

  • Ndb_api_trans_local_read_row_count_session: Total number of rows that have been read in this client session

    ndb_api_trans_local_read_row_count_session:已在此客户端会话中读取的行总数

  • Ndb_api_trans_local_read_row_count_slave: Total number of rows that have been read by this slave

    ndb_api_trans_local_read_row_count_slave:此slave已读取的行总数

  • Ndb_api_trans_start_count: Number of transactions started by this MySQL Server (SQL node)

    ndb_api_trans_start_count:此MySQL服务器(SQL节点)启动的事务数

  • Ndb_api_trans_start_count_session: Number of transactions started in this client session

    ndb_api_trans_start_count_会话:在此客户端会话中启动的事务数

  • Ndb_api_trans_start_count_slave: Number of transactions started by this slave

    ndb_api_trans_start_count_slave:此slave启动的事务数

  • Ndb_api_uk_op_count: Number of operations based on or using unique keys by this MySQL Server (SQL node)

    ndb_api_uk_op_count:基于或使用此mysql服务器(sql节点)的唯一密钥的操作数

  • Ndb_api_uk_op_count_session: Number of operations based on or using unique keys in this client session

    ndb_api_uk_op_count_会话:基于或使用此客户端会话中的唯一密钥的操作数

  • Ndb_api_uk_op_count_slave: Number of operations based on or using unique keys by this slave

    ndb_api_uk_op_count_slave:基于或使用此slave的唯一密钥的操作数

  • Ndb_api_wait_exec_complete_count: Number of times thread has been blocked while waiting for execution of an operation to complete by this MySQL Server (SQL node)

    Ndb_api_wait_exec_complete_count:此MySQL服务器(SQL节点)的线程在等待操作执行完成时被阻塞的次数

  • Ndb_api_wait_exec_complete_count_session: Number of times thread has been blocked while waiting for execution of an operation to complete in this client session

    Ndb_api_wait_exec_complete_count_session:在此客户端会话中线程在等待操作执行完成时被阻塞的次数

  • Ndb_api_wait_exec_complete_count_slave: Number of times thread has been blocked while waiting for execution of an operation to complete by this slave

    ndb_api_wait_exec_complete_count_slave:等待此从机完成操作时线程被阻塞的次数

  • Ndb_api_wait_meta_request_count: Number of times thread has been blocked waiting for a metadata-based signal by this MySQL Server (SQL node)

    Ndb_api_wait_meta_request_count:此MySQL服务器(SQL节点)的线程在等待基于元数据的信号时被阻塞的次数

  • Ndb_api_wait_meta_request_count_session: Number of times thread has been blocked waiting for a metadata-based signal in this client session

    ndb_api_wait_meta_request_count_session:在此客户端会话中等待基于元数据的信号的线程被阻止的次数

  • Ndb_api_wait_meta_request_count_slave: Number of times thread has been blocked waiting for a metadata-based signal by this slave

    Ndb_api_wait_meta_request_count_slave:此slave的线程在等待基于元数据的信号时被阻塞的次数

  • Ndb_api_wait_nanos_count: Total time (in nanoseconds) spent waiting for some type of signal from the data nodes by this MySQL Server (SQL node)

    ndb_api_wait_nanos_count:此mysql服务器(sql节点)等待来自数据节点的某种信号所花费的总时间(以纳秒为单位)

  • Ndb_api_wait_nanos_count_session: Total time (in nanoseconds) spent waiting for some type of signal from the data nodes in this client session

    Ndb_api_wait_nanos_count_session:在此客户端会话中等待来自数据节点的某种类型的信号所花费的总时间(以纳秒为单位)

  • Ndb_api_wait_nanos_count_slave: Total time (in nanoseconds) spent waiting for some type of signal from the data nodes by this slave

    ndb_api_wait_nanos_count_slave:此从节点等待来自数据节点的某种类型信号所用的总时间(以纳秒为单位)

  • Ndb_api_wait_scan_result_count: Number of times thread has been blocked while waiting for a scan-based signal by this MySQL Server (SQL node)

    Ndb_api_wait_scan_result_count:此MySQL服务器(SQL节点)的线程在等待基于扫描的信号时被阻塞的次数

  • Ndb_api_wait_scan_result_count_session: Number of times thread has been blocked while waiting for a scan-based signal in this client session

    Ndb_api_wait_scan_result_count_session:在此客户端会话中线程在等待基于扫描的信号时被阻塞的次数

  • Ndb_api_wait_scan_result_count_slave: Number of times thread has been blocked while waiting for a scan-based signal by this slave

    Ndb_api_wait_scan_result_count_slave:此slave的线程在等待基于扫描的信号时被阻塞的次数

  • ndb_autoincrement_prefetch_sz: NDB auto-increment prefetch size

    ndb_autoincrement_prefetch_sz:NDB自动增量预取大小

  • ndb_cache_check_time: Number of milliseconds between checks of cluster SQL nodes made by the MySQL query cache

    ndb_cache_check_time:mysql查询缓存对集群sql节点的检查间隔毫秒数

  • ndb_clear_apply_status: Causes RESET SLAVE to clear all rows from the ndb_apply_status table; ON by default

    ndb_clear_apply_status:使reset slave清除ndb_apply_status表中的所有行;默认为on

  • Ndb_cluster_node_id: If the server is acting as an NDB Cluster node, then the value of this variable is its node ID in the cluster

    Ndb_cluster_node_id:如果服务器充当NDB集群节点,则此变量的值是其在集群中的节点ID

  • Ndb_config_from_host: The host name or IP address of the Cluster management server. Formerly Ndb_connected_host

    Ndb_config_from_host:群集管理服务器的主机名或IP地址。以前称为Ndb_connected_host

  • Ndb_config_from_port: The port for connecting to Cluster management server. Formerly Ndb_connected_port

    Ndb_config_from_port:用于连接群集管理服务器的端口。以前称为Ndb_connected_port

  • Ndb_conflict_fn_epoch: Number of rows that have been found in conflict by the NDB$EPOCH() conflict detection function

    ndb_conflict_fn_epoch:ndb$epoch()冲突检测函数在冲突中找到的行数

  • Ndb_conflict_fn_epoch2: Number of rows that have been found in conflict by the NDB$EPOCH2() conflict detection function

    ndb_conflict_fn_epoch2:ndb$epoch2()冲突检测函数在冲突中找到的行数

  • Ndb_conflict_fn_epoch2_trans: Number of rows that have been found in conflict by the NDB$EPOCH2_TRANS() conflict detection function

    ndb_conflict_fn_epoch2_trans:ndb$epoch2_trans()冲突检测函数在冲突中找到的行数

  • Ndb_conflict_fn_epoch_trans: Number of rows that have been found in conflict by the NDB$EPOCH_TRANS() conflict detection function

    ndb_conflict_fn_epoch_trans:ndb$epoch_trans()冲突检测函数在冲突中发现的行数

  • Ndb_conflict_fn_max: If the server is part of an NDB Cluster involved in cluster replication, the value of this variable indicates the number of times that conflict resolution based on "greater timestamp wins" has been applied

    Ndb_conflict_fn_max:如果服务器是参与群集复制的NDB集群的一部分,则此变量的值指示基于“更大时间戳获胜”的冲突解决已被应用的次数

  • Ndb_conflict_fn_old: If the server is part of an NDB Cluster involved in cluster replication, the value of this variable indicates the number of times that "same timestamp wins" conflict resolution has been applied

    Ndb_conflict_fn_old:如果服务器是参与群集复制的NDB集群的一部分,则此变量的值表示“相同时间戳获胜”冲突解决已被应用的次数

  • Ndb_conflict_last_conflict_epoch: Most recent NDB epoch on this slave in which a conflict was detected

    ndb_conflict_last_conflict_epoch:检测到冲突的此从属服务器上的最新ndb epoch

  • Ndb_conflict_last_stable_epoch: Number of rows found to be in conflict by a transactional conflict function

    ndb_conflict_last_stable_epoch:事务冲突函数发现冲突的行数

  • Ndb_conflict_reflected_op_discard_count: Number of reflected operations that were not applied due to an error during execution

    ndb_conflict_reflected_op_discard_count:由于执行期间出错而未应用的反射操作数

  • Ndb_conflict_reflected_op_prepare_count: Number of reflected operations received that have been prepared for execution

    ndb_conflict_reflected_op_prepare_count:接收到的已准备好执行的反射操作数

  • Ndb_conflict_refresh_op_count: Number of refresh operations that have been prepared

    ndb_conflict_refresh_op_count:已准备的刷新操作数

  • Ndb_conflict_trans_conflict_commit_count: Number of epoch transactions committed after requiring transactional conflict handling

    ndb_conflict_trans_conflict_commit_count:要求处理事务冲突后提交的epoch事务数

  • Ndb_conflict_trans_detect_iter_count: Number of internal iterations required to commit an epoch transaction. Should be (slightly) greater than or equal to Ndb_conflict_trans_conflict_commit_count

    ndb_conflict_trans_detect_iter_count:提交epoch事务所需的内部迭代次数。应(略)大于或等于ndb_conflict_trans_conflict_commit_count

  • Ndb_conflict_trans_reject_count: Number of transactions rejected after being found in conflict by a transactional conflict function

    ndb_conflict_trans_reject_count:事务冲突函数发现冲突后被拒绝的事务数

  • Ndb_conflict_trans_row_conflict_count: Number of rows found in conflict by a transactional conflict function. Includes any rows included in or dependent on conflicting transactions.

    ndb_conflict_trans_row_conflict_count:事务冲突函数在冲突中找到的行数。包括冲突事务中包含的或依赖于冲突事务的任何行。

  • Ndb_conflict_trans_row_reject_count: Total number of rows realigned after being found in conflict by a transactional conflict function. Includes Ndb_conflict_trans_row_conflict_count and any rows included in or dependent on conflicting transactions.

    ndb_conflict_trans_row_reject_count:事务冲突函数在冲突中找到后重新调整的行总数。包括ndb_conflict_trans_row_conflict_count和包含在冲突事务中或依赖于冲突事务的任何行。

  • ndb_data_node_neighbour: Specifies cluster data node "closest" to this MySQL Server, for transaction hinting and fully replicated tables

    ndb_data_node_neighbour:指定离此MySQL服务器“最近”的集群数据节点,用于事务提示和完全复制表

  • ndb_default_column_format: Sets default row format and column format (FIXED or DYNAMIC) used for new NDB tables

    ndb_default_column_format:设置用于新NDB表的默认行格式和列格式(FIXED或DYNAMIC)

  • ndb_deferred_constraints: Specifies that constraint checks should be deferred (where these are supported). Not normally needed or used; for testing purposes only.

    ndb_deferred_constraints:指定应延迟约束检查(在支持这些检查的情况下)。通常不需要或不使用;仅用于测试目的。

  • ndb_distribution: Default distribution for new tables in NDBCLUSTER (KEYHASH or LINHASH, default is KEYHASH)

    ndb_distribution:NDBCLUSTER中新表的默认分布方式(KEYHASH或LINHASH,默认为KEYHASH)

  • Ndb_conflict_delete_delete_count: Number of delete-delete conflicts detected (delete operation is applied, but row does not exist)

    Ndb_conflict_delete_delete_count:检测到的删除-删除冲突数(删除操作被应用,但行不存在)

  • ndb_eventbuffer_free_percent: Percentage of free memory that should be available in event buffer before resumption of buffering, after reaching limit set by ndb_eventbuffer_max_alloc

    ndb_eventbuffer_free_percent:达到ndb_eventbuffer_max_alloc设置的限制后,恢复缓冲之前事件缓冲区中应可用的空闲内存百分比

  • ndb_eventbuffer_max_alloc: Maximum memory that can be allocated for buffering events by the NDB API. Defaults to 0 (no limit).

    ndb_eventbuffer_max_alloc:NDB API可为缓冲事件分配的最大内存。默认为0(无限制)。

  • Ndb_execute_count: Provides the number of round trips to the NDB kernel made by operations

    ndb_execute_count:提供操作到ndb内核的往返次数

  • ndb_extra_logging: Controls logging of NDB Cluster schema, connection, and data distribution events in the MySQL error log

    ndb_extra_logging:控制mysql错误日志中ndb集群模式、连接和数据分发事件的日志记录

  • ndb_force_send: Forces sending of buffers to NDB immediately, without waiting for other threads

    ndb_force_send:强制立即将缓冲区发送到ndb,而不等待其他线程

  • ndb_fully_replicated: Whether new NDB tables are fully replicated

    ndb_fully_replicated:是否完全复制新的ndb表

  • ndb_index_stat_enable: Use NDB index statistics in query optimization

    ndb_index_stat_enable:在查询优化中使用NDB索引统计信息

  • ndb_index_stat_option: Comma-separated list of tunable options for NDB index statistics; the list should contain no spaces

    ndb_index_stat_option:用于ndb索引统计的可调选项的逗号分隔列表;该列表不应包含空格

  • ndb_join_pushdown: Enables pushing down of joins to data nodes

    ndb_join_pushdown:允许将联接下推到数据节点

  • ndb_log_apply_status: Whether or not a MySQL server acting as a slave logs mysql.ndb_apply_status updates received from its immediate master in its own binary log, using its own server ID

    ndb_log_apply_status:充当从属服务器的MySQL服务器是否使用自己的服务器ID,在自己的二进制日志中记录从其直接主服务器接收到的mysql.ndb_apply_status更新

  • ndb_log_bin: Write updates to NDB tables in the binary log. Effective only if binary logging is enabled with --log-bin.

    ndb_log_bin:在二进制日志中写入对NDB表的更新。仅当使用--log-bin启用二进制日志记录时有效。

  • ndb_log_binlog_index: Insert mapping between epochs and binary log positions into the ndb_binlog_index table. Defaults to ON. Effective only if binary logging is enabled on the server.

    ndb_log_binlog_index:将纪元和二进制日志位置之间的映射插入ndb_binlog_index表。默认为打开。只有在服务器上启用二进制日志记录时才有效。

  • ndb_log_empty_epochs: When enabled, epochs in which there were no changes are written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.

    ndb_log_empty_epochs:启用时,即使启用了log_slave_updates,没有任何更改的epoch也会写入ndb_apply_status和ndb_binlog_index表。

  • ndb_log_empty_update: When enabled, updates which produce no changes are written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.

    ndb_log_empty_update:启用时,即使启用了log_slave_updates,不产生任何更改的更新也会写入ndb_apply_status和ndb_binlog_index表。

  • ndb_log_exclusive_reads: Log primary key reads with exclusive locks; allow conflict resolution based on read conflicts

    ndb_log_exclusive_reads:使用独占锁记录主键读取;允许基于读取冲突解决冲突

  • ndb_log_orig: Whether the id and epoch of the originating server are recorded in the mysql.ndb_binlog_index table. Set using the --ndb-log-orig option when starting mysqld.

    ndb_log_orig:发起服务器的ID和epoch是否记录在mysql.ndb_binlog_index表中。启动mysqld时使用--ndb-log-orig选项进行设置。

  • ndb_log_transaction_id: Whether NDB transaction IDs are written into the binary log (Read-only.)

    ndb_log_transaction_id:是否将ndb事务id写入二进制日志(只读)。

  • ndb-log-update-minimal: Log updates in a minimal format.

    ndb-log-update-minimal:以最小格式记录更新。

  • ndb_log_updated_only: Log complete rows (ON) or updates only (OFF)

    ndb_log_updated_only:记录完整行(ON)或仅记录更新(OFF)

  • Ndb_number_of_data_nodes: If the server is part of an NDB Cluster, the value of this variable is the number of data nodes in the cluster

    Ndb_number_of_data_nodes:如果服务器是NDB集群的一部分,则此变量的值是集群中的数据节点数

  • ndb-optimization-delay: Sets the number of milliseconds to wait between processing sets of rows by OPTIMIZE TABLE on NDB tables

    ndb-optimization-delay:设置NDB表上的OPTIMIZE TABLE在处理各组行之间等待的毫秒数

  • ndb_optimized_node_selection: Determines how an SQL node chooses a cluster data node to use as transaction coordinator

    ndb_optimized_node_selection:确定SQL节点如何选择要用作事务协调器的群集数据节点

  • Ndb_pruned_scan_count: Number of scans executed by NDB since the cluster was last started where partition pruning could be used

    Ndb_pruned_scan_count:自上次启动群集以来,NDB执行的可以使用分区修剪的扫描数

  • Ndb_pushed_queries_defined: Number of joins that API nodes have attempted to push down to the data nodes

    Ndb_pushed_queries_defined:API节点尝试下推到数据节点的联接数

  • Ndb_pushed_queries_dropped: Number of joins that API nodes have tried to push down, but failed

    Ndb_pushed_queries_dropped:API节点尝试下推但失败的联接数

  • Ndb_pushed_queries_executed: Number of joins successfully pushed down and executed on the data nodes

    Ndb_pushed_queries_executed:成功下推并在数据节点上执行的联接数

  • Ndb_pushed_reads: Number of reads executed on the data nodes by pushed-down joins

    Ndb_pushed_reads:下推联接在数据节点上执行的读取数

  • ndb_read_backup: Enable read from any replica

    ndb_read_backup:启用从任何副本读取

  • ndb_recv_thread_activation_threshold: Activation threshold when receive thread takes over the polling of the cluster connection (measured in concurrently active threads)

    ndb_recv_thread_activation_threshold:接收线程接管群集连接轮询的激活阈值(以并发活动线程数度量)

  • ndb_recv_thread_cpu_mask: CPU mask for locking receiver threads to specific CPUs; specified as hexadecimal. See documentation for details.

    ndb_recv_thread_cpu_mask:用于将接收器线程锁定到特定cpu的cpu掩码;指定为十六进制。有关详细信息,请参见文档。

  • ndb_report_thresh_binlog_epoch_slip: NDB 7.5.4 and later: Threshold for number of epochs completely buffered, but not yet consumed by binlog injector thread which when exceeded generates BUFFERED_EPOCHS_OVER_THRESHOLD event buffer status message; prior to NDB 7.5.4: Threshold for number of epochs to lag behind before reporting binary log status

    ndb_report_thresh_binlog_epoch_slip:NDB 7.5.4及更高版本:已完全缓冲但尚未被binlog注入器线程消费的epoch数阈值,超过该阈值时生成BUFFERED_EPOCHS_OVER_THRESHOLD事件缓冲区状态消息;NDB 7.5.4之前:报告二进制日志状态之前允许滞后的epoch数阈值

  • ndb_report_thresh_binlog_mem_usage: This is a threshold on the percentage of free memory remaining before reporting binary log status

    ndb_report_thresh_binlog_mem_usage:报告二进制日志状态之前剩余可用内存百分比的阈值

  • ndb_row_checksum: When enabled, set row checksums; enabled by default

    ndb_row_checksum:启用时,设置行校验和;默认启用

  • Ndb_scan_count: The total number of scans executed by NDB since the cluster was last started

    ndb_scan_count:自群集上次启动以来ndb执行的扫描总数

  • ndb_show_foreign_key_mock_tables: Show the mock tables used to support foreign_key_checks=0

    ndb_show_foreign_key_mock_tables:显示用于支持foreign_key_checks=0的模拟表

  • ndb_slave_conflict_role: Role for slave to play in conflict detection and resolution. Value is one of PRIMARY, SECONDARY, PASS, or NONE (default). Can be changed only when slave SQL thread is stopped. See documentation for further information.

    ndb_slave_conflict_role:从机在冲突检测和解决中的作用。值是primary、secondary、pass或none(默认值)之一。只有在从SQL线程停止时才能更改。有关更多信息,请参阅文档。

  • Ndb_slave_max_replicated_epoch: The most recently committed NDB epoch on this slave. When this value is greater than or equal to Ndb_conflict_last_conflict_epoch, no conflicts have yet been detected.

    ndb_slave_max_replicated_epoch:此从机上最近提交的ndb epoch。当此值大于或等于ndb_conflict_last_conflict_epoch时,尚未检测到冲突。

  • Ndb_system_name: Configured cluster system name; empty if server not connected to NDB

    ndb_system_name:已配置的群集系统名称;如果服务器未连接到ndb,则为空

  • ndb_table_no_logging: NDB tables created when this setting is enabled are not checkpointed to disk (although table schema files are created). The setting in effect when the table is created with or altered to use NDBCLUSTER persists for the lifetime of the table.

    ndb_table_no_logging:启用此设置时创建的NDB表不会被检查点写入磁盘(尽管会创建表架构文件)。创建表或将表更改为使用NDBCLUSTER时生效的设置在表的生命周期内保持有效。

  • ndb_table_temporary: NDB tables are not persistent on disk: no schema files are created and the tables are not logged

    ndb_table_temporary:磁盘上的ndb表不是持久的:没有创建架构文件,也没有记录表

  • ndb_use_exact_count: Use exact row count when planning queries

    ndb_use_exact_count:计划查询时使用精确行计数

  • ndb_use_transactions: Forces NDB to use a count of records during SELECT COUNT(*) query planning to speed up this type of query

    ndb_use_transactions:强制NDB在SELECT COUNT(*)查询规划期间使用记录计数,以加速此类查询

  • ndb_version: Shows build and NDB engine version as an integer

    ndb_version:将构建版本和NDB引擎版本显示为整数

  • ndb_version_string: Shows build information including NDB engine version in ndb-x.y.z format

    ndb_version_string:以ndb-x.y.z格式显示生成信息,包括ndb引擎版本

  • ndbcluster: Enable NDB Cluster (if this version of MySQL supports it). Disabled by --skip-ndbcluster

    ndbcluster:启用NDB集群(如果此版本的MySQL支持);可通过--skip-ndbcluster禁用

  • ndbinfo_database: The name used for the NDB information database; read only

    ndbinfo_database:NDB信息数据库使用的名称;只读

  • ndbinfo_max_bytes: Used for debugging only

    ndbinfo_max_bytes:仅用于调试

  • ndbinfo_max_rows: Used for debugging only

    ndbinfo_max_rows:仅用于调试

  • ndbinfo_offline: Put the ndbinfo database into offline mode, in which no rows are returned from tables or views

    ndbinfo_offline:将ndbinfo数据库置于脱机模式,在这种模式下,表或视图不返回行

  • ndbinfo_show_hidden: Whether to show ndbinfo internal base tables in the mysql client. The default is OFF.

    ndbinfo_show_hidden:是否在mysql客户端中显示ndbinfo内部基表。默认为OFF。

  • ndbinfo_table_prefix: The prefix to use for naming ndbinfo internal base tables

    ndbinfo_table_prefix:用于命名ndbinfo内部基表的前缀

  • ndbinfo_version: The version of the ndbinfo engine; read only

    ndbinfo_version:ndbinfo引擎的版本;只读

  • server-id-bits: Sets the number of least significant bits in the server_id actually used for identifying the server, permitting NDB API applications to store application data in the most significant bits. server_id must be less than 2 to the power of this value.

    server-id-bits:设置server_id中实际用于标识服务器的最低有效位数,允许NDB API应用程序将应用程序数据存储在最高有效位中。server_id必须小于2的此值次幂。

  • slave_allow_batching: Turns update batching on and off for a replication slave

    slave_allow_batching:为复制从机打开和关闭更新批处理

  • transaction_allow_batching: Allows batching of statements within a transaction. Disable AUTOCOMMIT to use.

    transaction_allow_batching:允许对事务中的语句进行批处理。使用时须禁用AUTOCOMMIT。
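
Among the options above, server-id-bits imposes a simple arithmetic constraint: with server-id-bits=N, only the N least significant bits of server_id identify the server, so server_id must be less than 2 to the power N. A minimal sketch of that arithmetic; the helper name and the example value N=7 are invented for illustration:

```python
# Hedged sketch of the server-id-bits constraint: with server-id-bits=N,
# only the N least significant bits of server_id identify the server,
# so server_id must be less than 2**N.
# max_server_id() is an illustrative helper, not part of any MySQL API.
def max_server_id(server_id_bits):
    return 2 ** server_id_bits - 1

# e.g. with server-id-bits=7, server_id must be less than 128
print(max_server_id(7))
```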

21.3.3 NDB Cluster Configuration Files

Configuring NDB Cluster requires working with two files:

配置ndb群集需要使用两个文件:

  • my.cnf: Specifies options for all NDB Cluster executables. This file, with which you should be familiar with from previous work with MySQL, must be accessible by each executable running in the cluster.

    my.cnf:指定所有ndb群集可执行文件的选项。这个文件,您应该熟悉以前使用mysql时使用的文件,它必须能够被集群中运行的每个可执行文件访问。

  • config.ini: This file, sometimes known as the global configuration file, is read only by the NDB Cluster management server, which then distributes the information contained therein to all processes participating in the cluster. config.ini contains a description of each node involved in the cluster. This includes configuration parameters for data nodes and configuration parameters for connections between all nodes in the cluster. For a quick reference to the sections that can appear in this file, and what sorts of configuration parameters may be placed in each section, see Sections of the config.ini File.

    config.ini:这个文件有时称为全局配置文件,由ndb集群管理服务器只读,然后将其中包含的信息分发给参与集群的所有进程。config.ini包含对集群中涉及的每个节点的描述。这包括数据节点的配置参数和群集中所有节点之间连接的配置参数。有关此文件中可能出现的节以及每个节中可能放置的配置参数类型的快速参考,请参阅config.ini文件的节。

Caching of configuration data.  NDB uses stateful configuration. Rather than reading the global configuration file every time the management server is restarted, the management server caches the configuration the first time it is started, and thereafter, the global configuration file is read only when one of the following conditions is true:

配置数据的缓存。NDB使用有状态配置。管理服务器不是每次重新启动时都读取全局配置文件,而是在第一次启动时缓存配置;此后,仅当下列条件之一为真时才会重新读取全局配置文件:

  • The management server is started using the --initial option.  When --initial is used, the global configuration file is re-read, any existing cache files are deleted, and the management server creates a new configuration cache.

    管理服务器使用--initial选项启动。使用--initial时,将重新读取全局配置文件,删除所有现有缓存文件,并由管理服务器创建新的配置缓存。

  • The management server is started using the --reload option.  The --reload option causes the management server to compare its cache with the global configuration file. If they differ, the management server creates a new configuration cache; any existing configuration cache is preserved, but not used. If the management server's cache and the global configuration file contain the same configuration data, then the existing cache is used, and no new cache is created.

    管理服务器使用--reload选项启动。--reload选项使管理服务器将其缓存与全局配置文件进行比较。如果它们不同,则管理服务器创建新的配置缓存;保存现有的配置缓存,但不使用。如果管理服务器的缓存和全局配置文件包含相同的配置数据,则使用现有的缓存,并且不创建新的缓存。

  • The management server is started using --config-cache=FALSE.  This disables --config-cache (enabled by default), and can be used to force the management server to bypass configuration caching altogether. In this case, the management server ignores any configuration cache files that may be present, always reading its configuration data from the config.ini file instead.

    管理服务器使用--config-cache=FALSE启动。这将禁用--config-cache(默认启用),可用于强制管理服务器完全绕过配置缓存。在这种情况下,管理服务器忽略可能存在的任何配置缓存文件,而总是从config.ini文件读取其配置数据。

  • No configuration cache is found.  In this case, the management server reads the global configuration file and creates a cache containing the same configuration data as found in the file.

    找不到配置缓存。在这种情况下,管理服务器读取全局配置文件并创建一个缓存,其中包含与该文件中相同的配置数据。

Configuration cache files.  The management server by default creates configuration cache files in a directory named mysql-cluster in the MySQL installation directory. (If you build NDB Cluster from source on a Unix system, the default location is /usr/local/mysql-cluster.) This can be overridden at runtime by starting the management server with the --configdir option. Configuration cache files are binary files named according to the pattern ndb_node_id_config.bin.seq_id, where node_id is the management server's node ID in the cluster, and seq_id is a cache identifier. Cache files are numbered sequentially using seq_id, in the order in which they are created. The management server uses the latest cache file as determined by the seq_id.

配置缓存文件。默认情况下,管理服务器在MySQL安装目录下名为mysql-cluster的目录中创建配置缓存文件。(如果在Unix系统上从源代码构建NDB集群,则默认位置为/usr/local/mysql-cluster。)可以在运行时通过使用--configdir选项启动管理服务器来覆盖此位置。配置缓存文件是按照模式ndb_node_id_config.bin.seq_id命名的二进制文件,其中node_id是管理服务器在集群中的节点ID,seq_id是缓存标识符。缓存文件按创建顺序使用seq_id依次编号。管理服务器使用由seq_id确定的最新缓存文件。
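
The "latest cache file by seq_id" selection described above can be sketched as follows; the directory listing and the helper function are invented for illustration and are not part of any NDB tool:

```python
import re

# Hedged sketch: given a directory listing, pick the cache file the
# management server would treat as latest -- the one with the highest
# seq_id in the ndb_<node_id>_config.bin.<seq_id> naming pattern.
def latest_cache(files, node_id):
    pattern = re.compile(r"ndb_%d_config\.bin\.(\d+)" % node_id)
    matches = [(int(m.group(1)), name)
               for name in files
               for m in [pattern.fullmatch(name)] if m]
    return max(matches)[1] if matches else None

files = ["ndb_1_config.bin.1", "ndb_1_config.bin.2", "ndb_1_config.bin.10"]
print(latest_cache(files, 1))  # seq_id is compared numerically, so .10 wins
```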

Note

It is possible to roll back to a previous configuration by deleting later configuration cache files, or by renaming an earlier cache file so that it has a higher seq_id. However, since configuration cache files are written in a binary format, you should not attempt to edit their contents by hand.

可以通过删除以后的配置缓存文件或重命名以前的缓存文件以使其具有更高的seq_id来回滚到以前的配置。但是,由于配置缓存文件是以二进制格式写入的,因此不应尝试手动编辑其内容。

For more information about the --configdir, --config-cache, --initial, and --reload options for the NDB Cluster management server, see Section 21.4.4, “ndb_mgmd — The NDB Cluster Management Server Daemon”.

有关NDB群集管理服务器的--configdir、--config-cache、--initial和--reload选项的更多信息,请参阅21.4.4节,“ndb_mgmd — NDB群集管理服务器守护程序”。

We are continuously making improvements in Cluster configuration and attempting to simplify this process. Although we strive to maintain backward compatibility, there may be times when we introduce an incompatible change. In such cases we will try to let Cluster users know in advance if a change is not backward compatible. If you find such a change and we have not documented it, please report it in the MySQL bugs database using the instructions given in Section 1.7, “How to Report Bugs or Problems”.

我们正在不断地改进集群配置,并试图简化这个过程。尽管我们努力保持向后兼容,但有时可能会引入不兼容的更改。在这种情况下,如果更改不向后兼容,我们将尝试让群集用户提前知道。如果您发现了这样的更改,而我们没有记录在案,请使用第1.7节“如何报告错误或问题”中给出的说明在mysql错误数据库中报告。

21.3.3.1 NDB Cluster Configuration: Basic Example

To support NDB Cluster, you will need to update my.cnf as shown in the following example. You may also specify these parameters on the command line when invoking the executables.

要支持ndb集群,您需要更新my.cnf,如下例所示。调用可执行文件时,也可以在命令行中指定这些参数。

Note

The options shown here should not be confused with those that are used in config.ini global configuration files. Global configuration options are discussed later in this section.

此处显示的选项不应与config.ini全局配置文件中使用的选项混淆。全局配置选项将在本节后面讨论。

# my.cnf
# example additions to my.cnf for NDB Cluster
# (valid in MySQL 5.7)

# enable ndbcluster storage engine, and provide connection string for
# management server host (default port is 1186)
[mysqld]
ndbcluster
ndb-connectstring=ndb_mgmd.mysql.com


# provide connection string for management server host (default port: 1186)
[ndbd]
connect-string=ndb_mgmd.mysql.com

# provide connection string for management server host (default port: 1186)
[ndb_mgm]
connect-string=ndb_mgmd.mysql.com

# provide location of cluster configuration file
[ndb_mgmd]
config-file=/etc/config.ini

(For more information on connection strings, see Section 21.3.3.3, “NDB Cluster Connection Strings”.)

(有关连接字符串的详细信息,请参阅21.3.3.3节,“ndb群集连接字符串”。)

# my.cnf
# example additions to my.cnf for NDB Cluster
# (will work on all versions)

# enable ndbcluster storage engine, and provide connection string for management
# server host to the default port 1186
[mysqld]
ndbcluster
ndb-connectstring=ndb_mgmd.mysql.com:1186
Important

Once you have started a mysqld process with the ndbcluster and ndb-connectstring parameters in the [mysqld] section of the my.cnf file as shown previously, you cannot execute any CREATE TABLE or ALTER TABLE statements without having actually started the cluster. Otherwise, these statements will fail with an error. This is by design.

如前所示,在my.cnf文件的[mysqld]部分中使用ndbcluster和ndb-connectstring参数启动mysqld进程后,如果没有实际启动集群,则无法执行任何CREATE TABLE或ALTER TABLE语句;否则,这些语句将失败并出现错误。这是有意设计的。

You may also use a separate [mysql_cluster] section in the cluster my.cnf file for settings to be read and used by all executables:

您也可以在cluster my.cnf文件中使用单独的[mysql_cluster]部分来读取所有可执行文件要使用的设置:

# cluster-specific settings
[mysql_cluster]
ndb-connectstring=ndb_mgmd.mysql.com:1186

For additional NDB variables that can be set in the my.cnf file, see Section 21.3.3.9.2, “NDB Cluster System Variables”.

有关可以在my.cnf文件中设置的其他ndb变量,请参阅21.3.3.9.2节“ndb cluster system variables”。

The NDB Cluster global configuration file is by convention named config.ini (but this is not required). If needed, it is read by ndb_mgmd at startup and can be placed in any location that can be read by it. The location and name of the configuration are specified using --config-file=path_name with ndb_mgmd on the command line. This option has no default value, and is ignored if ndb_mgmd uses the configuration cache.

按照约定,NDB集群全局配置文件名为config.ini(但这不是必需的)。如果需要,ndb_mgmd在启动时读取它;该文件可以放在ndb_mgmd能够读取的任何位置。配置文件的位置和名称在命令行上通过ndb_mgmd的--config-file=path_name选项指定。此选项没有默认值,如果ndb_mgmd使用配置缓存,则忽略此选项。

The global configuration file for NDB Cluster uses INI format, which consists of sections preceded by section headings (surrounded by square brackets), followed by the appropriate parameter names and values. One deviation from the standard INI format is that the parameter name and value can be separated by a colon (:) as well as the equal sign (=); however, the equal sign is preferred. Another deviation is that sections are not uniquely identified by section name. Instead, unique sections (such as two different nodes of the same type) are identified by a unique ID specified as a parameter within the section.

ndb集群的全局配置文件使用ini格式,该格式由前面有节标题(由方括号包围)的节组成,后面跟着适当的参数名和值。与标准ini格式的一个不同之处是,参数名和值可以用冒号(:)和等号(=)分隔;但是,最好使用等号。另一个偏差是节不是由节名称唯一标识的。相反,唯一的节(例如同一类型的两个不同节点)由指定为节内参数的唯一id标识。
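
For instance, both separators described above are accepted by the config.ini parser, although the equal sign is preferred; the section and values here are illustrative only:

```ini
[ndb_mgmd]
# preferred: equal sign
HostName=ndb_mgmd.mysql.com
# also accepted: colon separator
DataDir: /var/lib/mysql-cluster
```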

Default values are defined for most parameters, and can also be specified in config.ini. To create a default value section, simply add the word default to the section name. For example, an [ndbd] section contains parameters that apply to a particular data node, whereas an [ndbd default] section contains parameters that apply to all data nodes. Suppose that all data nodes should use the same data memory size. To configure them all, create an [ndbd default] section that contains a DataMemory line to specify the data memory size.

默认值是为大多数参数定义的,也可以在config.ini中指定。要创建默认值节,只需将单词default添加到节名称。例如,[ndbd]部分包含应用于特定数据节点的参数,而[ndbd default]部分包含应用于所有数据节点的参数。假设所有数据节点都应该使用相同的数据内存大小。要全部配置它们,请创建一个[ndbd default]节,其中包含一个数据内存行,以指定数据内存大小。
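
The shared DataMemory arrangement just described can be sketched like this; the size shown is purely illustrative:

```ini
# Applies to every data node unless overridden in an individual [ndbd] section
[ndbd default]
DataMemory=512M

[ndbd]
HostName=ndbd_2.mysql.com

[ndbd]
HostName=ndbd_3.mysql.com
```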

If used, the [ndbd default] section must precede any [ndbd] sections in the configuration file. This is also true for default sections of any other type.

如果使用,[ndbd default]部分必须在配置文件中的任何[ndbd]部分之前。对于任何其他类型的默认节也是如此。

Note

In some older releases of NDB Cluster, there was no default value for NoOfReplicas, which always had to be specified explicitly in the [ndbd default] section. Although this parameter now has a default value of 2, which is the recommended setting in most common usage scenarios, it is still recommended practice to set this parameter explicitly.

在一些旧版本的NDB Cluster中,NoOfReplicas没有默认值,必须在[ndbd default]部分显式指定。尽管此参数现在有默认值2(这是大多数常见使用场景中的推荐设置),仍建议显式设置此参数。

The global configuration file must define the computers and nodes involved in the cluster and on which computers these nodes are located. An example of a simple configuration file for a cluster consisting of one management server, two data nodes and two MySQL servers is shown here:

全局配置文件必须定义群集中涉及的计算机和节点以及这些节点所在的计算机。一个由一个管理服务器、两个数据节点和两个mysql服务器组成的集群的简单配置文件示例如下:

# file "config.ini" - 2 data nodes and 2 SQL nodes
# This file is placed in the startup directory of ndb_mgmd (the
# management server)
# The first MySQL Server can be started from any host. The second
# can be started only on the host mysqld_5.mysql.com

[ndbd default]
NoOfReplicas= 2
DataDir= /var/lib/mysql-cluster

[ndb_mgmd]
Hostname= ndb_mgmd.mysql.com
DataDir= /var/lib/mysql-cluster

[ndbd]
HostName= ndbd_2.mysql.com

[ndbd]
HostName= ndbd_3.mysql.com

[mysqld]
[mysqld]
HostName= mysqld_5.mysql.com
Note

The preceding example is intended as a minimal starting configuration for purposes of familiarization with NDB Cluster, and is almost certainly not sufficient for production settings. See Section 21.3.3.2, “Recommended Starting Configuration for NDB Cluster”, which provides a more complete example starting configuration.

为了熟悉ndb集群,前面的示例是一个最小的启动配置,并且几乎可以确定不足以进行生产设置。请参阅21.3.3.2节“建议的ndb集群启动配置”,其中提供了更完整的启动配置示例。

Each node has its own section in the config.ini file. For example, this cluster has two data nodes, so the preceding configuration file contains two [ndbd] sections defining these nodes.

每个节点在config.ini文件中都有自己的节。例如,这个集群有两个数据节点,所以前面的配置文件包含两个定义这些节点的[ndbd]部分。

Note

Do not place comments on the same line as a section heading in the config.ini file; this causes the management server not to start because it cannot parse the configuration file in such cases.

不要将注释与config.ini文件中的节标题放在同一行;这会导致管理服务器无法启动,因为在这种情况下它无法分析配置文件。

Sections of the config.ini File

There are six different sections that you can use in the config.ini configuration file, as described in the following list:

可以在config.ini配置文件中使用六个不同的部分,如下表所示:

  • [computer]: Defines cluster hosts. This is not required to configure a viable NDB Cluster, but may be used as a convenience when setting up a large cluster. See Section 21.3.3.4, “Defining Computers in an NDB Cluster”, for more information.

    [computer]:定义群集主机。配置可用的NDB集群并不需要此节,但在设置大型集群时可以提供便利。有关详细信息,请参阅第21.3.3.4节“在NDB集群中定义计算机”。

  • [ndbd]: Defines a cluster data node (ndbd process). See Section 21.3.3.6, “Defining NDB Cluster Data Nodes”, for details.

    [ndbd]:定义集群数据节点(ndbd进程)。详见21.3.3.6节“定义ndb集群数据节点”。

  • [mysqld]: Defines the cluster's MySQL server nodes (also called SQL or API nodes). For a discussion of SQL node configuration, see Section 21.3.3.7, “Defining SQL and Other API Nodes in an NDB Cluster”.

  • [mgm] or [ndb_mgmd]: Defines a cluster management server (MGM) node. For information concerning the configuration of management nodes, see Section 21.3.3.5, “Defining an NDB Cluster Management Server”.

  • [tcp]: Defines a TCP/IP connection between cluster nodes, with TCP/IP being the default connection protocol. Normally, [tcp] or [tcp default] sections are not required to set up an NDB Cluster, as the cluster handles this automatically; however, it may be necessary in some situations to override the defaults provided by the cluster. See Section 21.3.3.10, “NDB Cluster TCP/IP Connections”, for information about available TCP/IP configuration parameters and how to use them. (You may also find Section 21.3.3.11, “NDB Cluster TCP/IP Connections Using Direct Connections” to be of interest in some cases.)

  • [shm]: Defines shared-memory connections between nodes. In MySQL 5.7, it is enabled by default, but should still be considered experimental. For a discussion of SHM interconnects, see Section 21.3.3.12, “NDB Cluster Shared Memory Connections”.

  • [sci]: Defines Scalable Coherent Interface connections between cluster data nodes. Not supported in NDB 7.5 or 7.6.

You can define default values for each section. If used, a default section should come before any other sections of that type. For example, an [ndbd default] section should appear in the configuration file before any [ndbd] sections.
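
As a minimal sketch, settings shared by all data nodes can be factored out into an [ndbd default] section placed before the individual [ndbd] sections; the hostnames and path shown here are placeholders only:

[ndbd default]
NoOfReplicas=2
DataDir=/usr/local/mysql/data

[ndbd]
HostName=datanode-A-hostname

[ndbd]
HostName=datanode-B-hostname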

NDB Cluster parameter names are case-insensitive, unless specified in MySQL Server my.cnf or my.ini files.

21.3.3.2 Recommended Starting Configuration for NDB Cluster

Achieving the best performance from an NDB Cluster depends on a number of factors including the following:

  • NDB Cluster software version

  • Numbers of data nodes and SQL nodes

  • Hardware

  • Operating system

  • Amount of data to be stored

  • Size and type of load under which the cluster is to operate

Therefore, obtaining an optimum configuration is likely to be an iterative process, the outcome of which can vary widely with the specifics of each NDB Cluster deployment. Changes in configuration are also likely to be indicated when changes are made in the platform on which the cluster is run, or in applications that use the NDB Cluster's data. For these reasons, it is not possible to offer a single configuration that is ideal for all usage scenarios. However, in this section, we provide a recommended base configuration.

Starting config.ini file.  The following config.ini file is a recommended starting point for configuring a cluster running NDB Cluster 7.5:

# TCP PARAMETERS

[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M

# Increasing the sizes of these 2 buffers beyond the default values
# helps prevent bottlenecks due to slow disk I/O.

# MANAGEMENT NODE PARAMETERS

[ndb_mgmd default]
DataDir=path/to/management/server/data/directory

# It is possible to use a different data directory for each management
# server, but for ease of administration it is preferable to be
# consistent.

[ndb_mgmd]
HostName=management-server-A-hostname
# NodeId=management-server-A-nodeid

[ndb_mgmd]
HostName=management-server-B-hostname
# NodeId=management-server-B-nodeid

# Using 2 management servers helps guarantee that there is always an
# arbitrator in the event of network partitioning, and so is
# recommended for high availability. Each management server must be
# identified by a HostName. You may for the sake of convenience specify
# a NodeId for any management server, although one will be allocated
# for it automatically; if you do so, it must be in the range 1-255
# inclusive and must be unique among all IDs specified for cluster
# nodes.

# DATA NODE PARAMETERS

[ndbd default]
NoOfReplicas=2

# Using 2 replicas is recommended to guarantee availability of data;
# using only 1 replica does not provide any redundancy, which means
# that the failure of a single data node causes the entire cluster to
# shut down. We do not recommend using more than 2 replicas, since 2 is
# sufficient to provide high availability, and we do not currently test
# with greater values for this parameter.

LockPagesInMainMemory=1

# On Linux and Solaris systems, setting this parameter locks data node
# processes into memory. Doing so prevents them from swapping to disk,
# which can severely degrade cluster performance.

DataMemory=3072M
IndexMemory=384M

# The values provided for DataMemory and IndexMemory assume 4 GB RAM
# per data node. However, for best results, you should first calculate
# the memory that would be used based on the data you actually plan to
# store (you may find the ndb_size.pl utility helpful in estimating
# this), then allow an extra 20% over the calculated values. Naturally,
# you should ensure that each data node host has at least as much
# physical memory as the sum of these two values.
# NOTE: IndexMemory is deprecated in NDB 7.6 and later.

# ODirect=1

# Enabling this parameter causes NDBCLUSTER to try using O_DIRECT
# writes for local checkpoints and redo logs; this can reduce load on
# CPUs. We recommend doing so when using NDB Cluster on systems running
# Linux kernel 2.6 or later.

NoOfFragmentLogFiles=300
DataDir=path/to/data/node/data/directory
MaxNoOfConcurrentOperations=100000

SchedulerSpinTimer=400
SchedulerExecutionTimer=100
RealTimeScheduler=1
# Setting these parameters allows you to take advantage of real-time scheduling
# of NDB threads to achieve increased throughput when using ndbd. They
# are not needed when using ndbmtd; in particular, you should not set
# RealTimeScheduler for ndbmtd data nodes.

TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=200
RedoBuffer=32M

# CompressedLCP=1
# CompressedBackup=1
# Enabling CompressedLCP and CompressedBackup causes, respectively, local
# checkpoint files and backup files to be compressed, which can result in a
# space savings of up to 50% over noncompressed LCPs and backups.

# MaxNoOfLocalScans=64
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=256

[ndbd]
HostName=data-node-A-hostname
# NodeId=data-node-A-nodeid

LockExecuteThreadToCPU=1
LockMaintThreadsToCPU=0
# On systems with multiple CPUs, these parameters can be used to lock NDBCLUSTER
# threads to specific CPUs

[ndbd]
HostName=data-node-B-hostname
# NodeId=data-node-B-nodeid

LockExecuteThreadToCPU=1
LockMaintThreadsToCPU=0

# You must have an [ndbd] section for every data node in the cluster;
# each of these sections must include a HostName. Each section may
# optionally include a NodeId for convenience, but in most cases, it is
# sufficient to allow the cluster to allocate node IDs dynamically. If
# you do specify the node ID for a data node, it must be in the range 1
# to 48 inclusive and must be unique among all IDs specified for
# cluster nodes.

# SQL NODE / API NODE PARAMETERS

[mysqld]
# HostName=sql-node-A-hostname
# NodeId=sql-node-A-nodeid

[mysqld]

[mysqld]

# Each API or SQL node that connects to the cluster requires a [mysqld]
# or [api] section of its own. Each such section defines a connection
# slot; you should have at least as many of these sections in the
# config.ini file as the total number of API nodes and SQL nodes that
# you wish to have connected to the cluster at any given time. There is
# no performance or other penalty for having extra slots available in
# case you find later that you want or need more API or SQL nodes to
# connect to the cluster at the same time.
# If no HostName is specified for a given [mysqld] or [api] section,
# then any API or SQL node may use that slot to connect to the
# cluster. You may wish to use an explicit HostName for one connection slot
# to guarantee that an API or SQL node from that host can always
# connect to the cluster. If you wish to prevent API or SQL nodes from
# connecting from other than a desired host or hosts, then use a
# HostName for every [mysqld] or [api] section in the config.ini file.
# You can if you wish define a node ID (NodeId parameter) for any API or
# SQL node, but this is not necessary; if you do so, it must be in the
# range 1 to 255 inclusive and must be unique among all IDs specified
# for cluster nodes.

Recommended my.cnf options for SQL nodes.  MySQL Servers acting as NDB Cluster SQL nodes must always be started with the --ndbcluster and --ndb-connectstring options, either on the command line or in my.cnf. In addition, set the following options for all mysqld processes in the cluster, unless your setup requires otherwise:

  • --ndb-use-exact-count=0

  • --ndb-index-stat-enable=0

  • --ndb-force-send=1

  • --optimizer-switch=engine_condition_pushdown=on
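
Taken together, these recommendations correspond to a my.cnf fragment along the following lines (in an option file the leading dashes are dropped; the connection string value is a placeholder):

[mysqld]
ndbcluster
ndb-connectstring=management-server-hostname
ndb-use-exact-count=0
ndb-index-stat-enable=0
ndb-force-send=1
optimizer-switch=engine_condition_pushdown=on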

21.3.3.3 NDB Cluster Connection Strings

With the exception of the NDB Cluster management server (ndb_mgmd), each node that is part of an NDB Cluster requires a connection string that points to the management server's location. This connection string is used in establishing a connection to the management server as well as in performing other tasks depending on the node's role in the cluster. The syntax for a connection string is as follows:

[nodeid=node_id, ]host-definition[, host-definition[, ...]]

host-definition:
    host_name[:port_number]

node_id is an integer greater than or equal to 1 which identifies a node in config.ini. host_name is a string representing a valid Internet host name or IP address. port_number is an integer referring to a TCP/IP port number.

example 1 (long):    "nodeid=2,myhost1:1100,myhost2:1100,198.51.100.3:1200"
example 2 (short):   "myhost1"

localhost:1186 is used as the default connection string value if none is provided. If port_number is omitted from the connection string, the default port is 1186. This port should always be available on the network because it has been assigned by IANA for this purpose (see http://www.iana.org/assignments/port-numbers for details).

By listing multiple host definitions, it is possible to designate several redundant management servers. An NDB Cluster data or API node attempts to contact successive management servers on each host in the order specified, until a successful connection has been established.

It is also possible to specify in a connection string one or more bind addresses to be used by nodes having multiple network interfaces for connecting to management servers. A bind address consists of a hostname or network address and an optional port number. This enhanced syntax for connection strings is shown here:

[nodeid=node_id, ]
    [bind-address=host-definition, ]
    host-definition[; bind-address=host-definition]
    [, host-definition[; bind-address=host-definition]
    [, ...]]

host-definition:
    host_name[:port_number]

If a single bind address is used in the connection string prior to specifying any management hosts, then this address is used as the default for connecting to any of them (unless overridden for a given management server; see later in this section for an example). For example, the following connection string causes the node to use 198.51.100.242 regardless of the management server to which it connects:

bind-address=198.51.100.242, poseidon:1186, perch:1186

If a bind address is specified following a management host definition, then it is used only for connecting to that management node. Consider the following connection string:

poseidon:1186;bind-address=localhost, perch:1186;bind-address=198.51.100.242

In this case, the node uses localhost to connect to the management server running on the host named poseidon and 198.51.100.242 to connect to the management server running on the host named perch.

You can specify a default bind address and then override this default for one or more specific management hosts. In the following example, localhost is used for connecting to the management server running on host poseidon; since 198.51.100.242 is specified first (before any management server definitions), it is the default bind address and so is used for connecting to the management servers on hosts perch and orca:

bind-address=198.51.100.242,poseidon:1186;bind-address=localhost,perch:1186,orca:2200

There are a number of different ways to specify the connection string:

  • Each executable has its own command-line option which enables specifying the management server at startup. (See the documentation for the respective executable.)

  • It is also possible to set the connection string for all nodes in the cluster at once by placing it in a [mysql_cluster] section in the management server's my.cnf file.

  • For backward compatibility, two other options are available, using the same syntax:

    1. Set the NDB_CONNECTSTRING environment variable to contain the connection string.

    2. Write the connection string for each executable into a text file named Ndb.cfg and place this file in the executable's startup directory.

    However, these are now deprecated and should not be used for new installations.

The recommended method for specifying the connection string is to set it on the command line or in the my.cnf file for each executable.
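
As a sketch, the [mysql_cluster] approach mentioned above looks like the following in my.cnf, and is read by all cluster executables that use that file (hostnames are placeholders):

[mysql_cluster]
ndb-connectstring=management-host-A:1186,management-host-B:1186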

21.3.3.4 Defining Computers in an NDB Cluster

The [computer] section has no real significance other than serving as a way to avoid the need to define host names for each node in the system. All parameters mentioned here are required.

Restart types.  Information about the restart types used by the parameter descriptions in this section is shown in the following table:

Table 21.7 NDB Cluster restart types

Symbol Restart Type Description
N Node The parameter can be updated using a rolling restart (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”)
S System All cluster nodes must be shut down completely, then restarted, to effect a change in this parameter
I Initial Data nodes must be restarted using the --initial option

  • Id

    Table 21.8 This table provides type and value information for the Id computer configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units string
    Default [none]
    Range ...
    Restart Type IS

    This is a unique identifier, used to refer to the host computer elsewhere in the configuration file.

    Important

    The computer ID is not the same as the node ID used for a management, API, or data node. Unlike the case with node IDs, you cannot use NodeId in place of Id in the [computer] section of the config.ini file.

  • HostName

    Table 21.9 This table provides type and value information for the HostName computer configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default [none]
    Range ...
    Restart Type N

    This is the computer's hostname or IP address.
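
    As a sketch, a [computer] section supplies an Id and a HostName which other sections can then reference by way of the ExecuteOnComputer parameter (all values shown are placeholders; note that ExecuteOnComputer is deprecated in favor of HostName):

    [computer]
    Id=host1
    HostName=198.51.100.10

    [ndb_mgmd]
    ExecuteOnComputer=host1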

21.3.3.5 Defining an NDB Cluster Management Server

The [ndb_mgmd] section is used to configure the behavior of the management server. If multiple management servers are employed, you can specify parameters common to all of them in an [ndb_mgmd default] section. [mgm] and [mgm default] are older aliases for these, supported for backward compatibility.

All parameters in the following list are optional and assume their default values if omitted.

Note

If neither the ExecuteOnComputer nor the HostName parameter is present, the default value localhost will be assumed for both.

Restart types.  Information about the restart types used by the parameter descriptions in this section is shown in the following table:

Table 21.10 NDB Cluster restart types

Symbol Restart Type Description
N Node The parameter can be updated using a rolling restart (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”)
S System All cluster nodes must be shut down completely, then restarted, to effect a change in this parameter
I Initial Data nodes must be restarted using the --initial option

  • Id

    Table 21.11 This table provides type and value information for the Id management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 1 - 255
    Restart Type IS

    Each node in the cluster has a unique identity. For a management node, this is represented by an integer value in the range 1 to 255, inclusive. This ID is used by all internal cluster messages for addressing the node, and so must be unique for each NDB Cluster node, regardless of the type of node.

    Note

    Data node IDs must be less than 49. If you plan to deploy a large number of data nodes, it is a good idea to limit the node IDs for management nodes (and API nodes) to values greater than 48.

    The use of the Id parameter for identifying management nodes is deprecated in favor of NodeId. Although Id continues to be supported for backward compatibility, it now generates a warning and is subject to removal in a future version of NDB Cluster.

  • NodeId

    Table 21.12 This table provides type and value information for the NodeId management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 1 - 255
    Restart Type IS

    Each node in the cluster has a unique identity. For a management node, this is represented by an integer value in the range 1 to 255 inclusive. This ID is used by all internal cluster messages for addressing the node, and so must be unique for each NDB Cluster node, regardless of the type of node.

    Note

    Data node IDs must be less than 49. If you plan to deploy a large number of data nodes, it is a good idea to limit the node IDs for management nodes (and API nodes) to values greater than 48.

    NodeId is the preferred parameter name to use when identifying management nodes. Although the older Id continues to be supported for backward compatibility, it is now deprecated and generates a warning when used; it is also subject to removal in a future NDB Cluster release.
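
    For instance, following the advice above to keep management node IDs above the data node range, a management server might be defined as follows (the hostname is a placeholder):

    [ndb_mgmd]
    NodeId=49
    HostName=management-server-hostname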

  • ExecuteOnComputer

    Table 21.13 This table provides type and value information for the ExecuteOnComputer management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name
    Default [none]
    Range ...
    Restart Type S

    This refers to the Id set for one of the computers defined in a [computer] section of the config.ini file.

    Important

    This parameter is deprecated as of NDB 7.5.0, and is subject to removal in a future release. Use the HostName parameter instead.

  • PortNumber

    Table 21.14 This table provides type and value information for the PortNumber management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 1186
    Range 0 - 64K
    Restart Type S

    This is the port number on which the management server listens for configuration requests and management commands.

  • HostName

    Table 21.15 This table provides type and value information for the HostName management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default [none]
    Range ...
    Restart Type N

    Specifying this parameter defines the hostname of the computer on which the management node is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.

  • LocationDomainId

    Table 21.16 This table provides type and value information for the LocationDomainId management node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units integer
    Default 0
    Range 0 - 16
    Restart Type S

    Assigns a management node to a specific availability domain (also known as an availability zone) within a cloud. By informing NDB which nodes are in which availability domains, performance can be improved in a cloud environment in the following ways:

    • If requested data is not found on the same node, reads can be directed to another node in the same availability domain.

    • Communication between nodes in different availability domains is guaranteed to use NDB transporters' WAN support without any further manual intervention.

    • The transporter's group number can be based on which availability domain is used, so that SQL and other API nodes also communicate with local data nodes in the same availability domain whenever possible.

    • The arbitrator can be selected from an availability domain in which no data nodes are present, or, if no such availability domain can be found, from a third availability domain.

    LocationDomainId takes an integer value between 0 and 16 inclusive, with 0 being the default; using 0 is the same as leaving the parameter unset.

  • LogDestination

    Table 21.17 This table provides type and value information for the LogDestination management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units {CONSOLE|SYSLOG|FILE}
    Default [see text]
    Range ...
    Restart Type N

    This parameter specifies where to send cluster logging information. There are three options in this regard—CONSOLE, SYSLOG, and FILE—with FILE being the default:

    • CONSOLE outputs the log to stdout:

      CONSOLE
      
    • SYSLOG sends the log to a syslog facility, possible values being one of auth, authpriv, cron, daemon, ftp, kern, lpr, mail, news, syslog, user, uucp, local0, local1, local2, local3, local4, local5, local6, or local7.

      Note

      Not every facility is necessarily supported by every operating system.

      SYSLOG:facility=syslog
      
    • FILE pipes the cluster log output to a regular file on the same machine. The following values can be specified:

      • filename: The name of the log file.

        The default log file name used in such cases is ndb_nodeid_cluster.log.

      • maxsize: The maximum size (in bytes) to which the file can grow before logging rolls over to a new file. When this occurs, the old log file is renamed by appending .N to the file name, where N is the next number not yet used with this name.

      • maxfiles: The maximum number of log files.

      FILE:filename=cluster.log,maxsize=1000000,maxfiles=6
      

      The default value for the FILE parameter is FILE:filename=ndb_node_id_cluster.log,maxsize=1000000,maxfiles=6, where node_id is the ID of the node.

    It is possible to specify multiple log destinations separated by semicolons as shown here:

    CONSOLE;SYSLOG:facility=local0;FILE:filename=/var/log/mgmd
    
  • ArbitrationRank

    Table 21.18 This table provides type and value information for the ArbitrationRank management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units 0-2
    Default 1
    Range 0 - 2
    Restart Type N

    This parameter is used to define which nodes can act as arbitrators. Only management nodes and SQL nodes can be arbitrators. ArbitrationRank can take one of the following values:

    • 0: The node will never be used as an arbitrator.

    • 1: The node has high priority; that is, it will be preferred as an arbitrator over low-priority nodes.

    • 2: Indicates a low-priority node which is used as an arbitrator only if a node with a higher priority is not available for that purpose.

    Normally, the management server should be configured as an arbitrator by setting its ArbitrationRank to 1 (the default for management nodes) and those for all SQL nodes to 0 (the default for SQL nodes).

    You can disable arbitration completely either by setting ArbitrationRank to 0 on all management and SQL nodes, or by setting the Arbitration parameter in the [ndbd default] section of the config.ini global configuration file. Setting Arbitration causes any settings for ArbitrationRank to be disregarded.
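
    The usual arrangement just described can be sketched in config.ini as follows (hostnames are placeholders; the ArbitrationRank values shown are also the defaults):

    [ndb_mgmd]
    HostName=management-server-hostname
    ArbitrationRank=1

    [mysqld]
    HostName=sql-node-hostname
    ArbitrationRank=0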

  • ArbitrationDelay

    Table 21.19 This table provides type and value information for the ArbitrationDelay management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    An integer value which causes the management server's responses to arbitration requests to be delayed by that number of milliseconds. By default, this value is 0; it is normally not necessary to change it.

  • DataDir

    Table 21.20 This table provides type and value information for the DataDir management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units path
    Default .
    Range ...
    Restart Type N

    This specifies the directory where output files from the management server will be placed. These files include cluster log files, process output files, and the daemon's process ID (PID) file. (For log files, this location can be overridden by setting the FILE parameter for LogDestination as discussed previously in this section.)

    The default value for this parameter is the directory in which ndb_mgmd is located.

  • PortNumberStats

    Table 21.21 This table provides type and value information for the PortNumberStats management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 0 - 64K
    Restart Type N

    This parameter specifies the port number used to obtain statistical information from an NDB Cluster management server. It has no default value.

  • Wan

    Table 21.22 This table provides type and value information for the wan management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    Use WAN TCP setting as default.

  • HeartbeatThreadPriority

    Table 21.23 This table provides type and value information for the HeartbeatThreadPriority management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units string
    Default [none]
    Range ...
    Restart Type S

    Set the scheduling policy and priority of heartbeat threads for management and API nodes.

    The syntax for setting this parameter is shown here:

    HeartbeatThreadPriority = policy[, priority]
    
    policy:
      {FIFO | RR}
    

    When setting this parameter, you must specify a policy. This is one of FIFO (first in, first out) or RR (round robin). The policy value is followed optionally by the priority (an integer).

  • TotalSendBufferMemory

    Table 21.24 This table provides type and value information for the TotalSendBufferMemory management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 0
    Range 256K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter is used to determine the total amount of memory to allocate on this node for shared send buffer memory among all configured transporters.

    If this parameter is set, its minimum permitted value is 256KB; 0 indicates that the parameter has not been set. For more detailed information, see Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”.

  • HeartbeatIntervalMgmdMgmd

    Table 21.25 This table provides type and value information for the HeartbeatIntervalMgmdMgmd management node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 1500
    Range 100 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Specify the interval between heartbeat messages used to determine whether another management node is in contact with this one. The management node waits three of these intervals before declaring the connection dead; thus, the default setting of 1500 milliseconds causes the management node to wait approximately 4500 ms before timing out.

Note

After making changes in a management node's configuration, it is necessary to perform a rolling restart of the cluster for the new configuration to take effect.

To add new management servers to a running NDB Cluster, it is also necessary to perform a rolling restart of all cluster nodes after modifying any existing config.ini files. For more information about issues arising when using multiple management nodes, see Section 21.1.7.10, “Limitations Relating to Multiple NDB Cluster Nodes”.

21.3.3.6 Defining NDB Cluster Data Nodes

The [ndbd] and [ndbd default] sections are used to configure the behavior of the cluster's data nodes.

[ndbd] and [ndbd default] are always used as the section names whether you are using ndbd or ndbmtd binaries for the data node processes.

There are many parameters which control buffer sizes, pool sizes, timeouts, and so forth. The only mandatory parameter is either one of ExecuteOnComputer or HostName; this must be defined in the local [ndbd] section.

The parameter NoOfReplicas should be defined in the [ndbd default] section, as it is common to all Cluster data nodes. It is not strictly necessary to set NoOfReplicas, but it is good practice to set it explicitly.

Most data node parameters are set in the [ndbd default] section. Only those parameters explicitly stated as being able to set local values are permitted to be changed in the [ndbd] section. Where present, HostName, NodeId and ExecuteOnComputer must be defined in the local [ndbd] section, and not in any other section of config.ini. In other words, settings for these parameters are specific to one data node.

For those parameters affecting memory usage or buffer sizes, it is possible to use K, M, or G as a suffix to indicate units of 1024, 1024×1024, or 1024×1024×1024. (For example, 100K means 100 × 1024 = 102400.)

Parameter names and values are case-insensitive, unless used in a MySQL Server my.cnf or my.ini file, in which case they are case sensitive.

Information about configuration parameters specific to NDB Cluster Disk Data tables can be found later in this section (see Disk Data Configuration Parameters).

All of these parameters also apply to ndbmtd (the multithreaded version of ndbd). Three additional data node configuration parameters—MaxNoOfExecutionThreads, ThreadConfig, and NoOfFragmentLogParts—apply to ndbmtd only; these have no effect when used with ndbd. For more information, see Multi-Threading Configuration Parameters (ndbmtd). See also Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”.

Restart types.  Information about the restart types used by the parameter descriptions in this section is shown in the following table:

Table 21.26 NDB Cluster restart types

Symbol Restart Type Description
N Node The parameter can be updated using a rolling restart (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”)
S System All cluster nodes must be shut down completely, then restarted, to effect a change in this parameter
I Initial Data nodes must be restarted using the --initial option

Identifying data nodes.  The NodeId or Id value (that is, the data node identifier) can be allocated on the command line when the node is started or in the configuration file.

  • NodeId

    Table 21.27 This table provides type and value information for the NodeId data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 1 - 48
    Restart Type IS

    A unique node ID is used as the node's address for all cluster internal messages. For data nodes, this is an integer in the range 1 to 48 inclusive. Each node in the cluster must have a unique identifier.

    NodeId is the only supported parameter name to use when identifying data nodes. (Id was removed in NDB 7.5.0.)

  • ExecuteOnComputer

    Table 21.28 This table provides type and value information for the ExecuteOnComputer data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name
    Default [none]
    Range ...
    Restart Type S

    This refers to the Id set for one of the computers defined in a [computer] section.

    Important

    This parameter is deprecated as of NDB 7.5.0, and is subject to removal in a future release. Use the HostName parameter instead.

  • HostName

    Table 21.29 This table provides type and value information for the HostName data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default localhost
    Range ...
    Restart Type N

    Specifying this parameter defines the hostname of the computer on which the data node is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.

  • ServerPort

    Table 21.30 This table provides type and value information for the ServerPort data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 1 - 64K
    Restart Type S

    Each node in the cluster uses a port to connect to other nodes. By default, this port is allocated dynamically in such a way as to ensure that no two nodes on the same host computer receive the same port number, so it should normally not be necessary to specify a value for this parameter.

    However, if you need to be able to open specific ports in a firewall to permit communication between data nodes and API nodes (including SQL nodes), you can set this parameter to the number of the desired port in an [ndbd] section or (if you need to do this for multiple data nodes) the [ndbd default] section of the config.ini file, and then open the port having that number for incoming connections from SQL nodes, API nodes, or both.

    Note

    Connections from data nodes to management nodes are made using the ndb_mgmd management port (the management server's PortNumber), so outgoing connections to that port from any data node should always be permitted.

  • TcpBind_INADDR_ANY

    Setting this parameter to TRUE or 1 binds INADDR_ANY, so that connections can be made from anywhere (for autogenerated connections). The default is FALSE (0).

  • NodeGroup

    Table 21.31 This table provides type and value information for the NodeGroup data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units
    Default [none]
    Range 0 - 65536
    Restart Type IS

    This parameter can be used to assign a data node to a specific node group. It is read only when the cluster is started for the first time, and cannot be used to reassign a data node to a different node group online. It is generally not desirable to use this parameter in the [ndbd default] section of the config.ini file, and care must be taken not to assign nodes to node groups in such a way that an invalid number of nodes is assigned to any node group.

    The NodeGroup parameter is chiefly intended for use in adding a new node group to a running NDB Cluster without having to perform a rolling restart. For this purpose, you should set it to 65536 (the maximum value). You are not required to set a NodeGroup value for all cluster data nodes, only for those nodes which are to be started and added to the cluster as a new node group at a later time. For more information, see Section 21.5.15.3, “Adding NDB Cluster Data Nodes Online: Detailed Example”.

  • LocationDomainId

    Table 21.32 This table provides type and value information for the LocationDomainId data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units integer
    Default 0
    Range 0 - 16
    Restart Type S

    Assigns a data node to a specific availability domain (also known as an availability zone) within a cloud. By informing NDB which nodes are in which availability domains, performance can be improved in a cloud environment in the following ways:

    • If requested data is not found on the same node, reads can be directed to another node in the same availability domain.

    • Communication between nodes in different availability domains is guaranteed to use NDB transporters' WAN support without any further manual intervention.

    • The transporter's group number can be based on which availability domain is used, so that SQL and other API nodes also communicate with local data nodes in the same availability domain whenever possible.

    • The arbitrator can be selected from an availability domain in which no data nodes are present, or, if no such availability domain can be found, from a third availability domain.

    LocationDomainId takes an integer value between 0 and 16 inclusive, with 0 being the default; using 0 is the same as leaving the parameter unset.

  • NoOfReplicas

    Table 21.33 This table provides type and value information for the NoOfReplicas data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 2
    Range 1 - 4
    Restart Type IS

    This global parameter can be set only in the [ndbd default] section, and defines the number of replicas for each table stored in the cluster. This parameter also specifies the size of node groups. A node group is a set of nodes all storing the same information.

    Node groups are formed implicitly. The first node group is formed by the set of data nodes with the lowest node IDs, the next node group by the set of the next lowest node identities, and so on. By way of example, assume that we have 4 data nodes and that NoOfReplicas is set to 2. The four data nodes have node IDs 2, 3, 4 and 5. Then the first node group is formed from nodes 2 and 3, and the second node group by nodes 4 and 5. It is important to configure the cluster in such a manner that nodes in the same node groups are not placed on the same computer because a single hardware failure would cause the entire cluster to fail.

    If no node IDs are provided, the order of the data nodes will be the determining factor for the node group. Whether or not explicit assignments are made, they can be viewed in the output of the management client's SHOW command.

    The default value for NoOfReplicas is 2. This is the recommended value for most production environments.

    Important

    While the maximum possible value for this parameter is 4, setting NoOfReplicas to a value greater than 2 is not supported in production.

    Warning

    Setting NoOfReplicas to 1 means that there is only a single copy of all Cluster data; in this case, the loss of a single data node causes the cluster to fail because there are no additional copies of the data stored by that node.

    The value for this parameter must divide evenly into the number of data nodes in the cluster. For example, if there are two data nodes, then NoOfReplicas must be equal to either 1 or 2, since 2/3 and 2/4 both yield fractional values; if there are four data nodes, then NoOfReplicas must be equal to 1, 2, or 4.

  • DataDir

    Table 21.34 This table provides type and value information for the DataDir data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units path
    Default .
    Range ...
    Restart Type IN

    This parameter specifies the directory where trace files, log files, pid files and error logs are placed.

    The default is the data node process working directory.

  • FileSystemPath

    Table 21.35 This table provides type and value information for the FileSystemPath data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units path
    Default DataDir
    Range ...
    Restart Type IN

    This parameter specifies the directory where all files created for metadata, REDO logs, UNDO logs (for Disk Data tables), and data files are placed. The default is the directory specified by DataDir.

    Note

    This directory must exist before the ndbd process is initiated.

    The recommended directory hierarchy for NDB Cluster includes /var/lib/mysql-cluster, under which a directory for the node's file system is created. The name of this subdirectory contains the node ID. For example, if the node ID is 2, this subdirectory is named ndb_2_fs.

  • BackupDataDir

    Table 21.36 This table provides type and value information for the BackupDataDir data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units path
    Default [see text]
    Range ...
    Restart Type IN

    This parameter specifies the directory in which backups are placed.

    Important

    The string '/BACKUP' is always appended to this value. For example, if you set the value of BackupDataDir to /var/lib/cluster-data, then all backups are stored under /var/lib/cluster-data/BACKUP. This also means that the effective default backup location is the directory named BACKUP under the location specified by the FileSystemPath parameter.

Data Memory, Index Memory, and String Memory

DataMemory and IndexMemory are [ndbd] parameters specifying the size of memory segments used to store the actual records and their indexes. In setting values for these, it is important to understand how DataMemory and IndexMemory are used, as they usually need to be updated to reflect actual usage by the cluster.

Note

IndexMemory is deprecated in NDB 7.6, and subject to removal in a future version of NDB Cluster. See the descriptions that follow for further information.

  • DataMemory

    Table 21.37 This table provides type and value information for the DataMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 80M
    Range 1M - 1T
    Restart Type N
    Version (or later) NDB 7.6.2
    Type or units bytes
    Default 98M
    Range 1M - 1T
    Restart Type N

    This parameter defines the amount of space (in bytes) available for storing database records. The entire amount specified by this value is allocated in memory, so it is extremely important that the machine has sufficient physical memory to accommodate it.

    The memory allocated by DataMemory is used to store both the actual records and indexes. There is a 16-byte overhead on each record; an additional amount for each record is incurred because it is stored in a 32KB page with a 128-byte page overhead (see below). There is also a small amount wasted per page, because each record is stored in only one page.

    For variable-size table attributes, the data is stored on separate data pages, allocated from DataMemory. Variable-length records use a fixed-size part with an extra overhead of 4 bytes to reference the variable-size part. The variable-size part has 2 bytes overhead plus 2 bytes per attribute.

    The maximum record size is 14000 bytes.

    In NDB 7.5 (and earlier), the memory space defined by DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Each table row is represented in the ordered index. A common error among users is to assume that all indexes are stored in the memory allocated by IndexMemory, but this is not the case: Only primary key and unique hash indexes use this memory; ordered indexes use the memory allocated by DataMemory. However, creating a primary key or unique hash index also creates an ordered index on the same keys, unless you specify USING HASH in the index creation statement. This can be verified by running ndb_desc -d db_name table_name.

    In NDB 7.6, resources assigned to DataMemory are used for storing all data and indexes; any memory configured as IndexMemory is automatically added to that used by DataMemory to form a common resource pool.

    Currently, NDB Cluster can use a maximum of 512 MB for hash indexes per partition, which means in some cases it is possible to get Table is full errors in MySQL client applications even when ndb_mgm -e "ALL REPORT MEMORYUSAGE" shows significant free DataMemory. This can also pose a problem with data node restarts on nodes that are heavily loaded with data.

    In NDB 7.5.4 and later, you can control the number of partitions per local data manager for a given table by setting the NDB_TABLE option PARTITION_BALANCE to one of the values FOR_RA_BY_LDM, FOR_RA_BY_LDM_X_2, FOR_RA_BY_LDM_X_3, or FOR_RA_BY_LDM_X_4, for 1, 2, 3, or 4 partitions per LDM, respectively, when creating the table (see Section 13.1.18.10, “Setting NDB_TABLE Options”).

    Note

    In previous versions of NDB Cluster it was possible to create extra partitions for NDB Cluster tables and thus have more memory available for hash indexes by using the MAX_ROWS option for CREATE TABLE. While still supported for backward compatibility, using MAX_ROWS for this purpose is deprecated beginning with NDB 7.5.4, where you should use PARTITION_BALANCE instead.

    You can also use the MinFreePct configuration parameter to help avoid problems with node restarts.

    The memory space allocated by DataMemory consists of 32KB pages, which are allocated to table fragments. Each table is normally partitioned into the same number of fragments as there are data nodes in the cluster. Thus, for each node, there are the same number of fragments as are set in NoOfReplicas.

    Once a page has been allocated, it is currently not possible to return it to the pool of free pages, except by deleting the table. (This also means that DataMemory pages, once allocated to a given table, cannot be used by other tables.) Performing a data node recovery also compresses the partition because all records are inserted into empty partitions from other live nodes.

    The DataMemory memory space also contains UNDO information: For each update, a copy of the unaltered record is allocated in the DataMemory. There is also a reference to each copy in the ordered table indexes. Unique hash indexes are updated only when the unique index columns are updated, in which case a new entry in the index table is inserted and the old entry is deleted upon commit. For this reason, it is also necessary to allocate enough memory to handle the largest transactions performed by applications using the cluster. In any case, performing a few large transactions holds no advantage over using many smaller ones, for the following reasons:

    • Large transactions are not any faster than smaller ones

    • Large transactions increase the number of operations that are lost and must be repeated in event of transaction failure

    • Large transactions use more memory

    In NDB 7.5 (and earlier), the default value for DataMemory is 80MB; beginning with NDB 7.6.2, it is 98MB. The minimum value is 1MB. There is no maximum size, but in practice the maximum must be chosen so that the process does not start swapping when the limit is reached. This limit is determined by the amount of physical RAM available on the machine and by the amount of memory that the operating system may commit to any one process. 32-bit operating systems are generally limited to 2-4GB per process; 64-bit operating systems can use more. For large databases, it may be preferable to use a 64-bit operating system for this reason.

  • IndexMemory

    Table 21.38 This table provides type and value information for the IndexMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 18M
    Range 1M - 1T
    Restart Type N
    Version (or later) NDB 7.6.2
    Type or units bytes
    Default 0
    Range 1M - 1T
    Restart Type N

    In NDB 7.5 and earlier, this parameter controls the amount of storage used for hash indexes in NDB Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. When defining a primary key or a unique index, two indexes are created, one of which is a hash index used for all tuple accesses as well as lock handling. This index is also used to enforce unique constraints.

    Beginning with NDB 7.6.2, the IndexMemory parameter is deprecated (and subject to future removal); any memory assigned to IndexMemory is allocated instead to the same pool as DataMemory, which becomes solely responsible for all resources needed for storing data and indexes in memory. In NDB 7.6.2 and later, the use of IndexMemory in the cluster configuration file triggers a warning from the management server.

    You can estimate the size of a hash index using this formula:

      size  = ( (fragments * 32K) + (rows * 18) )
              * replicas
    
              

    fragments is the number of fragments, replicas is the number of replicas (normally 2), and rows is the number of rows. If a table has one million rows, 8 fragments, and 2 replicas, the expected index memory usage is calculated as shown here:

      ((8 * 32K) + (1000000 * 18)) * 2 = ((8 * 32768) + (1000000 * 18)) * 2
      = (262144 + 18000000) * 2
      = 18262144 * 2 = 36524288 bytes = ~35MB
    

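The arithmetic above can be checked directly (a sketch of the stated formula, not an NDB API):

```python
# size = ((fragments * 32K) + (rows * 18)) * replicas
def hash_index_size(fragments, rows, replicas):
    return ((fragments * 32 * 1024) + (rows * 18)) * replicas

# One million rows, 8 fragments, 2 replicas:
print(hash_index_size(8, 1_000_000, 2))  # 36524288 bytes, roughly 35MB
```
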
    Index statistics for ordered indexes (when these are enabled) are stored in the mysql.ndb_index_stat_sample table. Since this table has a hash index, this adds to index memory usage. An upper bound to the number of rows for a given ordered index can be calculated as follows:

      sample_size = key_size + ((key_attributes + 1) * 4)
    
      sample_rows = IndexStatSaveSize
                    * ((0.01 * IndexStatSaveScale * ln(rows * sample_size)) + 1)
                    / sample_size
    

    In the preceding formula, key_size is the size of the ordered index key in bytes, key_attributes is the number of attributes in the ordered index key, and rows is the number of rows in the base table.

    Assume that table t1 has 1 million rows and an ordered index named ix1 on two four-byte integers. Assume in addition that IndexStatSaveSize and IndexStatSaveScale are set to their default values (32K and 100, respectively). Using the previous 2 formulas, we can calculate as follows:

      sample_size = 8  + ((1 + 2) * 4) = 20 bytes
    
      sample_rows = 32K
                    * ((0.01 * 100 * ln(1000000*20)) + 1)
                    / 20
                    = 32768 * ((1 * ~16.811) + 1) / 20
                    = 32768 * ~17.811 / 20
                    = ~29182 rows
    

    The expected index memory usage is thus 2 * 18 * 29182 = ~1050550 bytes.

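The calculation can be reproduced numerically; note that the ~16.811 figure above is the natural logarithm of 1000000 × 20, so math.log is used here (a sketch, not an NDB API):

```python
import math

IndexStatSaveSize = 32 * 1024   # default 32K
IndexStatSaveScale = 100        # default

key_size = 8                    # two 4-byte integer key columns
key_attributes = 2
rows = 1_000_000

sample_size = key_size + ((key_attributes + 1) * 4)
sample_rows = (IndexStatSaveSize
               * ((0.01 * IndexStatSaveScale * math.log(rows * sample_size)) + 1)
               / sample_size)

print(sample_size)                   # 20 bytes
print(round(sample_rows))            # ~29182 rows
print(round(2 * 18 * sample_rows))   # ~1050550 bytes of index memory
```
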
    Prior to NDB 7.6.2, the default value for IndexMemory is 18MB and the minimum is 1MB; in NDB 7.6.2 and later, the minimum and default value for this parameter is 0 (zero).

  • StringMemory

    Table 21.39 This table provides type and value information for the StringMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units % or bytes
    Default 25
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type S

    This parameter determines how much memory is allocated for strings such as table names, and is specified in an [ndbd] or [ndbd default] section of the config.ini file. A value between 0 and 100 inclusive is interpreted as a percent of the maximum default value, which is calculated based on a number of factors including the number of tables, maximum table name size, maximum size of .FRM files, MaxNoOfTriggers, maximum column name size, and maximum default column value.

    A value greater than 100 is interpreted as a number of bytes.

    The default value is 25—that is, 25 percent of the default maximum.

    Under most circumstances, the default value should be sufficient, but when you have a great many NDB tables (1000 or more), it is possible to get Error 773 (Out of string memory, please modify StringMemory config parameter: Permanent error: Schema error), in which case you should increase this value. 25 (25 percent) is not excessive, and should prevent this error from recurring in all but the most extreme conditions.
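
    The percent-versus-bytes interpretation can be sketched as follows (a sketch only; the calculated maximum shown is a hypothetical figure standing in for the value NDB derives from the factors listed above):

```python
def string_memory_bytes(setting, calculated_max):
    # A StringMemory setting of 0-100 inclusive is a percentage of the
    # calculated default maximum; anything larger is an absolute byte count.
    if 0 <= setting <= 100:
        return calculated_max * setting // 100
    return setting

print(string_memory_bytes(25, 40_000_000))           # default: 25% -> 10000000
print(string_memory_bytes(134_217_728, 40_000_000))  # >100: taken as bytes
```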

The following example illustrates how memory is used for a table. Consider this table definition:

CREATE TABLE example (
  a INT NOT NULL,
  b INT NOT NULL,
  c INT NOT NULL,
  PRIMARY KEY(a),
  UNIQUE(b)
) ENGINE=NDBCLUSTER;

For each record, there are 12 bytes of data plus 12 bytes of overhead. Having no nullable columns saves 4 bytes of overhead. In addition, we have two ordered indexes on columns a and b consuming roughly 10 bytes each per record. There is a primary key hash index on the base table using roughly 29 bytes per record. The unique constraint is implemented by a separate table with b as primary key and a as a column. This other table consumes an additional 29 bytes of index memory per record in the example table, as well as 8 bytes of record data plus 12 bytes of overhead.

Thus, for one million records, we need 58MB for index memory to handle the hash indexes for the primary key and the unique constraint. We also need 64MB for the records of the base table and the unique index table, plus the two ordered index tables.
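
The arithmetic above can be tallied in a short sketch (the per-record byte counts are the approximations given in the text):

```python
ROWS = 1_000_000

# Index (hash) memory per record: ~29 bytes for the primary key hash
# index on the base table, plus ~29 bytes for the hash index of the
# hidden table that implements the unique constraint on b.
index_memory = (29 + 29) * ROWS

# Data memory per record: base row (12 data + 12 overhead), the unique
# index table's row (8 data + 12 overhead), and the two ordered
# indexes on a and b (~10 bytes each).
data_memory = (12 + 12 + 8 + 12 + 10 + 10) * ROWS

print(index_memory // 1_000_000, "MB")   # 58 MB of index memory
print(data_memory // 1_000_000, "MB")    # 64 MB of data memory
```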

You can see that hash indexes take up a fair amount of memory space; however, they provide very fast access to the data in return. They are also used in NDB Cluster to handle uniqueness constraints.

Currently, the only partitioning algorithm is hashing and ordered indexes are local to each node. Thus, ordered indexes cannot be used to handle uniqueness constraints in the general case.

An important point for both IndexMemory and DataMemory is that the total database size is the sum of all data memory and all index memory for each node group. Each node group is used to store replicated information, so if there are four nodes with two replicas, there will be two node groups. Thus, the total data memory available is 2 × DataMemory for each data node.

It is highly recommended that DataMemory and IndexMemory be set to the same values for all nodes. Data distribution is even over all nodes in the cluster, so the maximum amount of space available for any node can be no greater than that of the smallest node in the cluster.

DataMemory (and, in NDB 7.5 and earlier, IndexMemory) can be changed, but decreasing it can be risky; doing so can easily lead to a node or even an entire NDB Cluster that is unable to restart due to there being insufficient memory space. Increases should be acceptable, but it is recommended that such upgrades are performed in the same manner as a software upgrade, beginning with an update of the configuration file, and then restarting the management server followed by restarting each data node in turn.

MinFreePct.  A proportion (5% by default) of data node resources including DataMemory (and, in NDB 7.5 and earlier, IndexMemory) is kept in reserve to ensure that the data node does not exhaust its memory when performing a restart. This can be adjusted using the MinFreePct data node configuration parameter (default 5).

Table 21.40 This table provides type and value information for the MinFreePct data node configuration parameter

Property Value
Version (or later) NDB 7.5.0
Type or units unsigned
Default 5
Range 0 - 100
Restart Type N

Updates do not increase the amount of index memory used. Inserts take effect immediately; however, rows are not actually deleted until the transaction is committed.

Transaction parameters.  The next few [ndbd] parameters that we discuss are important because they affect the number of parallel transactions and the sizes of transactions that can be handled by the system. MaxNoOfConcurrentTransactions sets the number of parallel transactions possible in a node. MaxNoOfConcurrentOperations sets the number of records that can be in update phase or locked simultaneously.

Both of these parameters (especially MaxNoOfConcurrentOperations) are likely targets for users setting specific values and not using the default value. The default value is set for systems using small transactions, to ensure that these do not use excessive memory.

MaxDMLOperationsPerTransaction sets the maximum number of DML operations that can be performed in a given transaction.

  • MaxNoOfConcurrentTransactions

    Table 21.41 This table provides type and value information for the MaxNoOfConcurrentTransactions data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 4096
    Range 32 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Each cluster data node requires a transaction record for each active transaction in the cluster. The task of coordinating transactions is distributed among all of the data nodes. The total number of transaction records in the cluster is the number of transactions in any given node times the number of nodes in the cluster.

    Transaction records are allocated to individual MySQL servers. Each connection to a MySQL server requires at least one transaction record, plus an additional transaction object per table accessed by that connection. This means that a reasonable minimum for the total number of transactions in the cluster can be expressed as

    TotalNoOfConcurrentTransactions =
        (maximum number of tables accessed in any single transaction + 1)
        * number of SQL nodes
    

    Suppose that there are 10 SQL nodes using the cluster. A single join involving 10 tables requires 11 transaction records; if there are 10 such joins in a transaction, then 10 * 11 = 110 transaction records are required for this transaction, per MySQL server, or 110 * 10 = 1100 transaction records total. Each data node can be expected to handle TotalNoOfConcurrentTransactions / number of data nodes. For an NDB Cluster having 4 data nodes, this would mean setting MaxNoOfConcurrentTransactions on each data node to 1100 / 4 = 275. In addition, you should provide for failure recovery by ensuring that a single node group can accommodate all concurrent transactions; in other words, that each data node's MaxNoOfConcurrentTransactions is sufficient to cover a number of transactions equal to TotalNoOfConcurrentTransactions / number of node groups. If this cluster has a single node group, then MaxNoOfConcurrentTransactions should be set to 1100 (the same as the total number of concurrent transactions for the entire cluster).
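
    The walk-through above can be reproduced step by step (a sketch; the node counts are those of the example):

```python
sql_nodes = 10
tables_per_join = 10
joins_per_transaction = 10

# One join over 10 tables needs 11 transaction records (tables + 1);
# 10 such joins per transaction, per MySQL server:
records_per_server = joins_per_transaction * (tables_per_join + 1)
total = records_per_server * sql_nodes   # total for the cluster

data_nodes = 4
node_groups = 1
print(records_per_server)     # 110 per MySQL server
print(total // data_nodes)    # 275: expected share per data node
print(total // node_groups)   # 1100: setting that also covers failure recovery
```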

    In addition, each transaction involves at least one operation; for this reason, the value set for MaxNoOfConcurrentTransactions should always be no more than the value of MaxNoOfConcurrentOperations.

    This parameter must be set to the same value for all cluster data nodes. This is due to the fact that, when a data node fails, the oldest surviving node re-creates the transaction state of all transactions that were ongoing in the failed node.

    It is possible to change this value using a rolling restart, but the amount of traffic on the cluster must be such that no more transactions occur than the lower of the old and new levels while this is taking place.

    The default value is 4096.

  • MaxNoOfConcurrentOperations

    Table 21.42 This table provides type and value information for the MaxNoOfConcurrentOperations data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 32K
    Range 32 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    It is a good idea to adjust the value of this parameter according to the size and number of transactions. When performing transactions which involve only a few operations and records, the default value for this parameter is usually sufficient. Performing large transactions involving many records usually requires that you increase its value.

    Records are kept for each transaction updating cluster data, both in the transaction coordinator and in the nodes where the actual updates are performed. These records contain state information needed to find UNDO records for rollback, lock queues, and other purposes.

    This parameter should be set at a minimum to the number of records to be updated simultaneously in transactions, divided by the number of cluster data nodes. For example, in a cluster which has four data nodes and which is expected to handle one million concurrent updates using transactions, you should set this value to 1000000 / 4 = 250000. To help provide resiliency against failures, it is suggested that you set this parameter to a value that is high enough to permit an individual data node to handle the load for its node group. In other words, you should set the value equal to total number of concurrent operations / number of node groups. (In the case where there is a single node group, this is the same as the total number of concurrent operations for the entire cluster.)
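
    The same division can be sketched for operation records (the one-million-update workload and four data nodes are the example's assumptions):

```python
concurrent_updates = 1_000_000
data_nodes = 4
node_groups = 1

# Minimum assuming perfectly even distribution over the data nodes:
print(concurrent_updates // data_nodes)    # 250000
# Value high enough for one node to carry its whole node group's load:
print(concurrent_updates // node_groups)   # 1000000
```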

    Because each transaction always involves at least one operation, the value of MaxNoOfConcurrentOperations should always be greater than or equal to the value of MaxNoOfConcurrentTransactions.

    Read queries which set locks also cause operation records to be created. Some extra space is allocated within individual nodes to accommodate cases where the distribution is not perfect over the nodes.

    When queries make use of the unique hash index, there are actually two operation records used per record in the transaction. The first record represents the read in the index table and the second handles the operation on the base table.

    The default value is 32768.

    This parameter actually handles two values that can be configured separately. The first of these specifies how many operation records are to be placed with the transaction coordinator. The second part specifies how many operation records are to be local to the database.

    A very large transaction performed on an eight-node cluster requires as many operation records in the transaction coordinator as there are reads, updates, and deletes involved in the transaction. However, these operation records are spread over all eight nodes. Thus, if it is necessary to configure the system for one very large transaction, it is a good idea to configure the two parts separately. MaxNoOfConcurrentOperations will always be used to calculate the number of operation records in the transaction coordinator portion of the node.

    It is also important to have an idea of the memory requirements for operation records. These consume about 1KB per record.

  • MaxNoOfLocalOperations

    Table 21.43 This table provides type and value information for the MaxNoOfLocalOperations data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default UNDEFINED
    Range 32 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    By default, this parameter is calculated as 1.1 × MaxNoOfConcurrentOperations. This fits systems with many simultaneous transactions, none of them being very large. If there is a need to handle one very large transaction at a time and there are many nodes, it is a good idea to override the default value by explicitly specifying this parameter.

  • MaxDMLOperationsPerTransaction

    Table 21.44 This table provides type and value information for the MaxDMLOperationsPerTransaction data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units operations (DML)
    Default 4294967295
    Range 32 - 4294967295
    Restart Type N

    This parameter limits the size of a transaction. The transaction is aborted if it requires more than this many DML operations. The minimum number of operations per transaction is 32; however, you can set MaxDMLOperationsPerTransaction to 0 to disable any limitation on the number of DML operations per transaction. The maximum (and default) is 4294967295.

Transaction temporary storage.  The next set of [ndbd] parameters is used to determine temporary storage when executing a statement that is part of a Cluster transaction. All records are released when the statement is completed and the cluster is waiting for the commit or rollback.

The default values for these parameters are adequate for most situations. However, users with a need to support transactions involving large numbers of rows or operations may need to increase these values to enable better parallelism in the system, whereas users whose applications require relatively small transactions can decrease the values to save memory.

  • MaxNoOfConcurrentIndexOperations

    Table 21.45 This table provides type and value information for the MaxNoOfConcurrentIndexOperations data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 8K
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    For queries using a unique hash index, another temporary set of operation records is used during a query's execution phase. This parameter sets the size of that pool of records. Thus, this record is allocated only while executing a part of a query. As soon as this part has been executed, the record is released. The state needed to handle aborts and commits is handled by the normal operation records, where the pool size is set by the parameter MaxNoOfConcurrentOperations.

    The default value of this parameter is 8192. Only in rare cases of extremely high parallelism using unique hash indexes should it be necessary to increase this value. Using a smaller value is possible and can save memory if the DBA is certain that a high degree of parallelism is not required for the cluster.

  • MaxNoOfFiredTriggers

    Table 21.46 This table provides type and value information for the MaxNoOfFiredTriggers data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 4000
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    The default value of MaxNoOfFiredTriggers is 4000, which is sufficient for most situations. In some cases it can even be decreased if the DBA feels certain the need for parallelism in the cluster is not high.

    A record is created when an operation is performed that affects a unique hash index. Inserting or deleting a record in a table with unique hash indexes or updating a column that is part of a unique hash index fires an insert or a delete in the index table. The resulting record is used to represent this index table operation while waiting for the original operation that fired it to complete. This operation is short-lived but can still require a large number of records in its pool for situations with many parallel write operations on a base table containing a set of unique hash indexes.

  • TransactionBufferMemory

    Table 21.47 This table provides type and value information for the TransactionBufferMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 1M
    Range 1K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    The memory affected by this parameter is used for tracking operations fired when updating index tables and reading unique indexes. This memory is used to store the key and column information for these operations. It is only very rarely that the value for this parameter needs to be altered from the default.

    The default value for TransactionBufferMemory is 1MB.

    Normal read and write operations use a similar buffer, whose usage is even more short-lived. The compile-time parameter ZATTRBUF_FILESIZE (found in ndb/src/kernel/blocks/Dbtc/Dbtc.hpp) is set to 4000 × 128 bytes (500KB). A similar buffer for key information, ZDATABUF_FILESIZE (also in Dbtc.hpp), contains 4000 × 16 = 62.5KB of buffer space. Dbtc is the module that handles transaction coordination.

Scans and buffering.  There are additional [ndbd] parameters in the Dblqh module (in ndb/src/kernel/blocks/Dblqh/Dblqh.hpp) that affect reads and updates. These include ZATTRINBUF_FILESIZE, set by default to 10000 × 128 bytes (1250KB), and ZDATABUF_FILE_SIZE, set by default to 10000 × 16 bytes (roughly 156KB) of buffer space. To date, there have been neither any reports from users nor any results from our own extensive tests suggesting that either of these compile-time limits should be increased.

  • BatchSizePerLocalScan

    Table 21.48 This table provides type and value information for the BatchSizePerLocalScan data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 256
    Range 1 - 992
    Restart Type N

    This parameter is used to calculate the number of lock records used to handle concurrent scan operations.

    BatchSizePerLocalScan has a strong connection to the BatchSize defined in the SQL nodes.

  • LongMessageBuffer

    Table 21.49 This table provides type and value information for the LongMessageBuffer data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 64M
    Range 512K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This is an internal buffer used for passing messages within individual nodes and between nodes. The default is 64MB.

    This parameter seldom needs to be changed from the default.

  • MaxFKBuildBatchSize

    Table 21.50 This table provides type and value information for the MaxFKBuildBatchSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units integer
    Default 64
    Range 16 - 512
    Restart Type S

    Maximum scan batch size used for building foreign keys. Increasing the value set for this parameter may speed up foreign key builds at the expense of greater impact to ongoing traffic.

    Added in NDB 7.6.4

  • MaxNoOfConcurrentScans

    Table 21.51 This table provides type and value information for the MaxNoOfConcurrentScans data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 256
    Range 2 - 500
    Restart Type N

    This parameter is used to control the number of parallel scans that can be performed in the cluster. Each transaction coordinator can handle the number of parallel scans defined for this parameter. Each scan query is performed by scanning all partitions in parallel. Each partition scan uses a scan record in the node where the partition is located, the number of records being the value of this parameter times the number of nodes. The cluster should be able to sustain MaxNoOfConcurrentScans scans concurrently from all nodes in the cluster.

    Scans are actually performed in two cases. The first of these cases occurs when no hash or ordered indexes exist to handle the query, in which case the query is executed by performing a full table scan. The second case is encountered when there is no hash index to support the query but there is an ordered index. Using the ordered index means executing a parallel range scan. The order is kept on the local partitions only, so it is necessary to perform the index scan on all partitions.

    The default value of MaxNoOfConcurrentScans is 256. The maximum value is 500.

  • MaxNoOfLocalScans

    Table 21.52 This table provides type and value information for the MaxNoOfLocalScans data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default [see text]
    Range 32 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Specifies the number of local scan records if many scans are not fully parallelized. When the number of local scan records is not provided, it is calculated as shown here:

    4 * MaxNoOfConcurrentScans * [# data nodes] + 2
    

    The minimum value is 32.
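
    The default calculation can be sketched as follows (MaxNoOfConcurrentScans defaults to 256; the four-data-node cluster is a hypothetical example):

```python
def default_max_local_scans(max_concurrent_scans, data_nodes):
    # Default used when MaxNoOfLocalScans is not set explicitly:
    # 4 * MaxNoOfConcurrentScans * [# data nodes] + 2
    return 4 * max_concurrent_scans * data_nodes + 2

print(default_max_local_scans(256, 4))   # 4098 for a four-data-node cluster
```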

  • MaxParallelCopyInstances

    Table 21.53 This table provides type and value information for the MaxParallelCopyInstances data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 64
    Restart Type S

    This parameter sets the parallelization used in the copy phase of a node restart or system restart, when a node that is just starting is synchronized with a node that already has current data by copying over any changed records from the up-to-date node. Because full parallelism in such cases can lead to overload situations, MaxParallelCopyInstances provides a means to decrease it. This parameter's default value is 0, which means that the effective parallelism is equal to the number of LDM instances in the node that is just starting, as well as in the node updating it.

  • MaxParallelScansPerFragment

    Table 21.54 This table provides type and value information for the MaxParallelScansPerFragment data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 256
    Range 1 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    It is possible to configure the maximum number of parallel scans (TUP scans and TUX scans) allowed before they begin queuing for serial handling. You can increase this value to take advantage of any unused CPU when performing a large number of scans in parallel, and to improve their performance.

    The default value for this parameter is 256.

  • MaxReorgBuildBatchSize

    Table 21.55 This table provides type and value information for the MaxReorgBuildBatchSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units integer
    Default 64
    Range 16 - 512
    Restart Type S

    Maximum scan batch size used for reorganization of table partitions. Increasing the value set for this parameter may speed up reorganization at the expense of greater impact to ongoing traffic.

    Added in NDB 7.6.4

  • MaxUIBuildBatchSize

    Table 21.56 This table provides type and value information for the MaxUIBuildBatchSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units integer
    Default 64
    Range 16 - 512
    Restart Type S

    Maximum scan batch size used for building unique keys. Increasing the value set for this parameter may speed up such builds at the expense of greater impact to ongoing traffic.

    Added in NDB 7.6.4

Memory Allocation

MaxAllocate

Table 21.57 This table provides type and value information for the MaxAllocate data node configuration parameter

Property Value
Version (or later) NDB 7.5.0
Type or units unsigned
Default 32M
Range 1M - 1G
Restart Type N

This is the maximum size of the memory unit to use when allocating memory for tables. In cases where NDB gives Out of memory errors, but it is evident by examining the cluster logs or the output of DUMP 1000 that all available memory has not yet been used, you can increase the value of this parameter (or MaxNoOfTables, or both) to cause NDB to make sufficient memory available.

Hash Map Size

DefaultHashMapSize

Table 21.58 This table provides type and value information for the DefaultHashMapSize data node configuration parameter

Property Value
Version (or later) NDB 7.5.0
Type or units LDM threads
Default 3840
Range 0 - 3840
Restart Type N

The size of the table hash maps used by NDB is configurable using this parameter. DefaultHashMapSize can take any of three possible values (0, 240, 3840). These values and their effects are described in the following table:

Table 21.59 DefaultHashMapSize parameters

Value Description / Effect
0 Use the lowest value set, if any, for this parameter among all data nodes and API nodes in the cluster; if it is not set on any data or API node, use the default value.
240 Original hash map size (used by default in all NDB Cluster releases prior to NDB 7.2.7)
3840 Larger hash map size (used by default beginning with NDB 7.2.7)

The original intended use for this parameter was to facilitate upgrades and especially downgrades to and from very old releases with differing default hash map sizes. This is not an issue when upgrading from NDB Cluster 7.4 to NDB Cluster 7.5.

Logging and checkpointing.  The following [ndbd] parameters control log and checkpoint behavior.

  • FragmentLogFileSize

    Table 21.60 This table provides type and value information for the FragmentLogFileSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 16M
    Range 4M - 1G
    Restart Type IN

    Setting this parameter enables you to control directly the size of redo log files. This can be useful in situations when NDB Cluster is operating under a high load and it is unable to close fragment log files quickly enough before attempting to open new ones (only 2 fragment log files can be open at one time); increasing the size of the fragment log files gives the cluster more time before having to open each new fragment log file. The default value for this parameter is 16M.

    For more information about fragment log files, see the description for NoOfFragmentLogFiles.

  • InitialNoOfOpenFiles

    Table 21.61 This table provides type and value information for the InitialNoOfOpenFiles data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units files
    Default 27
    Range 20 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter sets the initial number of internal threads to allocate for open files.

    The default value is 27.

  • InitFragmentLogFiles

    Table 21.62 This table provides type and value information for the InitFragmentLogFiles data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units [see values]
    Default SPARSE
    Range SPARSE, FULL
    Restart Type IN

    By default, fragment log files are created sparsely when performing an initial start of a data node—that is, depending on the operating system and file system in use, not all bytes are necessarily written to disk. However, it is possible to override this behavior and force all bytes to be written, regardless of the platform and file system type being used, by means of this parameter. InitFragmentLogFiles takes either of two values:

    • SPARSE. Fragment log files are created sparsely. This is the default value.

    • FULL. Force all bytes of the fragment log file to be written to disk.

    Depending on your operating system and file system, setting InitFragmentLogFiles=FULL may help eliminate I/O errors on writes to the REDO log.

  • EnablePartialLcp

    Table 21.63 This table provides type and value information for the EnablePartialLcp data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units boolean
    Default true
    Range true, false
    Restart Type N

    When true, enable partial local checkpoints: This means that each LCP records only part of the full database, plus any records containing rows changed since the last LCP; if no rows have changed, the LCP updates only the LCP control file and does not update any data files.

    If EnablePartialLcp is disabled (false), each LCP uses only a single file and writes a full checkpoint; this requires the least amount of disk space for LCPs, but increases the write load for each LCP. The default value is enabled (true). The proportion of space used by partial LCPs can be modified by the setting for the RecoveryWork configuration parameter.

    In NDB 7.6.7 and later, setting this parameter to false also disables the calculation of disk write speed used by the adaptive LCP control mechanism.
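Both parameters can be set cluster-wide in config.ini; a minimal illustrative fragment (the values shown are the NDB 7.6.5 and later defaults, not tuning advice):

```ini
# config.ini fragment (illustrative); applies to all data nodes
[ndbd default]
EnablePartialLcp=true
RecoveryWork=60
```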

  • LcpScanProgressTimeout

    Table 21.64 This table provides type and value information for the LcpScanProgressTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units second
    Default 60
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    A local checkpoint fragment scan watchdog checks periodically for no progress in each fragment scan performed as part of a local checkpoint, and shuts down the node if there is no progress after a given amount of time has elapsed. This interval can be set using the LcpScanProgressTimeout data node configuration parameter, which sets the maximum time for which the local checkpoint can be stalled before the LCP fragment scan watchdog shuts down the node.

    The default value is 60 seconds (providing compatibility with previous releases). Setting this parameter to 0 disables the LCP fragment scan watchdog altogether.

  • MaxNoOfOpenFiles

    Table 21.65 This table provides type and value information for the MaxNoOfOpenFiles data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 20 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter sets a ceiling on how many internal threads to allocate for open files. Any situation requiring a change in this parameter should be reported as a bug.

    The default value is 0. However, the minimum value to which this parameter can be set is 20.

  • MaxNoOfSavedMessages

    Table 21.66 This table provides type and value information for the MaxNoOfSavedMessages data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 25
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter sets the maximum number of errors written in the error log as well as the maximum number of trace files that are kept before overwriting the existing ones. Trace files are generated when, for whatever reason, the node crashes.

    The default is 25, which sets these maximums to 25 error messages and 25 trace files.

  • MaxLCPStartDelay

    Table 21.67 This table provides type and value information for the MaxLCPStartDelay data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units seconds
    Default 0
    Range 0 - 600
    Restart Type N

    In parallel data node recovery, only table data is actually copied and synchronized in parallel; synchronization of metadata such as dictionary and checkpoint information is done in a serial fashion. In addition, recovery of dictionary and checkpoint information cannot be executed in parallel with performing of local checkpoints. This means that, when starting or restarting many data nodes concurrently, data nodes may be forced to wait while a local checkpoint is performed, which can result in longer node recovery times.

    It is possible to force a delay in the local checkpoint to permit more (and possibly all) data nodes to complete metadata synchronization; once each data node's metadata synchronization is complete, all of the data nodes can recover table data in parallel, even while the local checkpoint is being executed. To force such a delay, set MaxLCPStartDelay, which determines the number of seconds the cluster can wait to begin a local checkpoint while data nodes continue to synchronize metadata. This parameter should be set in the [ndbd default] section of the config.ini file, so that it is the same for all data nodes. The maximum value is 600; the default is 0.
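As noted, this parameter must be the same on all data nodes, so it belongs in the [ndbd default] section; a minimal illustrative fragment (the 30-second value is an example, not a recommendation):

```ini
# config.ini fragment (illustrative)
[ndbd default]
# Wait up to 30 seconds for metadata synchronization before starting an LCP
MaxLCPStartDelay=30
```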

  • NoOfFragmentLogFiles

    Table 21.68 This table provides type and value information for the NoOfFragmentLogFiles data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 16
    Range 3 - 4294967039 (0xFFFFFEFF)
    Restart Type IN

    This parameter sets the number of REDO log files for the node, and thus the amount of space allocated to REDO logging. Because the REDO log files are organized in a ring, it is extremely important that the first and last log files in the set (sometimes referred to as the head and tail log files, respectively) do not meet. When these approach one another too closely, the node begins aborting all transactions encompassing updates due to a lack of room for new log records.

    A REDO log record is not removed until both required local checkpoints have been completed since that log record was inserted. Checkpointing frequency is determined by its own set of configuration parameters discussed elsewhere in this chapter.

    The default parameter value is 16, which by default means 16 sets of 4 files of 16MB each, for a total of 1024MB. The size of the individual log files is configurable using the FragmentLogFileSize parameter. In scenarios requiring a great many updates, the value for NoOfFragmentLogFiles may need to be set as high as 300 or even higher to provide sufficient space for REDO logs.

    If the checkpointing is slow and there are so many writes to the database that the log files are full and the log tail cannot be cut without jeopardizing recovery, all updating transactions are aborted with internal error code 410 (Out of log file space temporarily). This condition prevails until a checkpoint has completed and the log tail can be moved forward.

    Important

    This parameter cannot be changed on the fly; you must restart the node using --initial. If you wish to change this value for all data nodes in a running cluster, you can do so using a rolling node restart (using --initial when starting each data node).
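The redo log sizing described above is simple arithmetic (NoOfFragmentLogFiles sets of 4 fragment log files, each of FragmentLogFileSize bytes); the following sketch, using an illustrative helper name, shows the calculation:

```python
def redo_log_bytes(no_of_fragment_log_files=16, fragment_log_file_size=16 * 1024**2):
    """Total redo log space: NoOfFragmentLogFiles sets of 4 files each."""
    return no_of_fragment_log_files * 4 * fragment_log_file_size

# Defaults: 16 sets x 4 files x 16MB = 1024MB
print(redo_log_bytes() // 1024**2)     # 1024
# A write-heavy setting of 300 raises the redo log to 19200MB
print(redo_log_bytes(300) // 1024**2)  # 19200
```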

  • RecoveryWork

    Table 21.69 This table provides type and value information for the RecoveryWork data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units integer
    Default 50
    Range 25 - 100
    Restart Type N
    Version (or later) NDB 7.6.5
    Type or units integer
    Default 60
    Range 25 - 100
    Restart Type N

    Percentage of storage overhead for LCP files. This parameter has an effect only when EnablePartialLcp is true, that is, only when partial local checkpoints are enabled. A higher value means:

    • Fewer records are written for each LCP, LCPs use more space

    • More work is needed during restarts

    A lower value for RecoveryWork means:

    • More records are written during each LCP, but LCPs require less space on disk.

    • Less work during restart and thus faster restarts, at the expense of more work during normal operations

    For example, setting RecoveryWork to 60 means that the total size of an LCP is roughly 1 + 0.6 = 1.6 times the size of the data to be checkpointed. This means that 60% more work is required during the restore phase of a restart compared to the work done during a restart that uses full checkpoints. (This is more than compensated for during other phases of the restart such that the restart as a whole is still faster when using partial LCPs than when using full LCPs.) In order not to fill up the redo log, it is necessary to write at 1 + (1 / RecoveryWork) times the rate of data changes during checkpoints; thus, when RecoveryWork = 60, it is necessary to write at approximately 1 + (1 / 0.6) = 2.67 times the change rate. In other words, if changes are being written at 10 MByte per second, the checkpoint needs to be written at roughly 26.7 MByte per second.

    Setting RecoveryWork = 40 means that only 1.4 times the total LCP size is needed (and thus the restore phase takes 10 to 15 percent less time). In this case, the checkpoint write rate is 3.5 times the rate of change.

    The NDB source distribution includes a test program for simulating LCPs. lcp_simulator.cc can be found in storage/ndb/src/kernel/blocks/backup/. To compile and run it on Unix platforms, execute the commands shown here:

    shell> gcc lcp_simulator.cc
    shell> ./a.out
    

    This program has no dependencies other than stdio.h, and does not require a connection to an NDB cluster or a MySQL server. By default, it simulates 300 LCPs (three sets of 100 LCPs, each consisting of inserts, updates, and deletes, in turn), reporting the size of the LCP after each one. You can alter the simulation by changing the values of recovery_work, insert_work, and delete_work in the source and recompiling. For more information, see the source of the program.
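The RecoveryWork arithmetic above can be sketched as follows (the helper names are illustrative, not NDB APIs; RecoveryWork is given as a percentage):

```python
def lcp_size_factor(recovery_work):
    """Total LCP size relative to the checkpointed data: 1 + RecoveryWork."""
    return (100 + recovery_work) / 100

def checkpoint_write_factor(recovery_work):
    """Required checkpoint write rate relative to the data change rate:
    1 + (1 / RecoveryWork)."""
    return (recovery_work + 100) / recovery_work

print(lcp_size_factor(60))                         # 1.6
print(round(checkpoint_write_factor(60), 2))       # 2.67
# With changes written at 10 MByte/s and RecoveryWork = 60:
print(round(10 * checkpoint_write_factor(60), 1))  # 26.7
```

With RecoveryWork = 40, the same helpers give the 1.4x LCP size and 3.5x write rate quoted above.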

  • InsertRecoveryWork

    Table 21.70 This table provides type and value information for the InsertRecoveryWork data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.5
    Type or units integer
    Default 40
    Range 0 - 70
    Restart Type N

    Percentage of RecoveryWork used for inserted rows. A higher value increases the number of writes during a local checkpoint, and decreases the total size of the LCP. A lower value decreases the number of writes during an LCP, but results in more space being used for the LCP, which means that recovery takes longer. This parameter has an effect only when EnablePartialLcp is true, that is, only when partial local checkpoints are enabled.

  • EnableRedoControl

    Table 21.71 This table provides type and value information for the EnableRedoControl data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.7
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    Enable adaptive checkpointing speed for controlling redo log usage. Set to false to disable (the default). Setting EnablePartialLcp to false also disables the adaptive calculation.

    When enabled, EnableRedoControl allows the data nodes greater flexibility with regard to the rate at which they write LCPs to disk. More specifically, enabling this parameter means that higher write rates can be employed, so that LCPs can complete and Redo logs be trimmed more quickly, thereby reducing recovery time and disk space requirements. This functionality allows data nodes to make better use of the higher rate of I/O and greater bandwidth available from modern solid-state storage devices and protocols, such as solid-state drives (SSDs) using Non-Volatile Memory Express (NVMe).

    The parameter currently defaults to false (disabled) due to the fact that NDB is still deployed widely on systems whose I/O or bandwidth is constrained relative to those employing solid-state technology, such as those using conventional hard disks (HDDs). In settings such as these, the EnableRedoControl mechanism can easily cause the I/O subsystem to become saturated, increasing wait times for data node input and output. In particular, this can cause issues with NDB Disk Data tables which have tablespaces or log file groups sharing a constrained IO subsystem with data node LCP and redo log files; such problems potentially include node or cluster failure due to GCP stop errors.

Metadata objects.  The next set of [ndbd] parameters defines pool sizes for metadata objects, used to define the maximum number of attributes, tables, indexes, and trigger objects used by indexes, events, and replication between clusters.

Note

These act merely as suggestions to the cluster, and any that are not specified revert to the default values shown.

  • MaxNoOfAttributes

    Table 21.72 This table provides type and value information for the MaxNoOfAttributes data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 1000
    Range 32 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter sets a suggested maximum number of attributes that can be defined in the cluster; like MaxNoOfTables, it is not intended to function as a hard upper limit.

    (In older NDB Cluster releases, this parameter was sometimes treated as a hard limit for certain operations. This caused problems with NDB Cluster Replication, when it was possible to create more tables than could be replicated, and sometimes led to confusion when it was possible [or not possible, depending on the circumstances] to create more than MaxNoOfAttributes attributes.)

    The default value is 1000, with the minimum possible value being 32. The maximum is 4294967039. Each attribute consumes around 200 bytes of storage per node due to the fact that all metadata is fully replicated on the servers.

    When setting MaxNoOfAttributes, it is important to prepare in advance for any ALTER TABLE statements that you might want to perform in the future. This is due to the fact, during the execution of ALTER TABLE on a Cluster table, 3 times the number of attributes as in the original table are used, and a good practice is to permit double this amount. For example, if the NDB Cluster table having the greatest number of attributes (greatest_number_of_attributes) has 100 attributes, a good starting point for the value of MaxNoOfAttributes would be 6 * greatest_number_of_attributes = 600.

    You should also estimate the average number of attributes per table and multiply this by MaxNoOfTables. If this value is larger than the value obtained in the previous paragraph, you should use the larger value instead.

    Assuming that you can create all desired tables without any problems, you should also verify that this number is sufficient by trying an actual ALTER TABLE after configuring the parameter. If this is not successful, increase MaxNoOfAttributes by another multiple of MaxNoOfTables and test it again.
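The sizing heuristic described above can be sketched as follows (the helper and its name are illustrative only):

```python
def suggested_max_no_of_attributes(widest_table_attrs,
                                   avg_attrs_per_table,
                                   max_no_of_tables):
    """Starting point for MaxNoOfAttributes: double the 3x headroom that
    ALTER TABLE needs for the widest table, but never less than the
    estimated cluster-wide attribute count."""
    alter_table_headroom = 6 * widest_table_attrs
    estimated_total = avg_attrs_per_table * max_no_of_tables
    return max(alter_table_headroom, estimated_total)

# Widest table has 100 attributes; 4 attributes per table on average with
# MaxNoOfTables = 128, so 6 * 100 = 600 dominates 4 * 128 = 512
print(suggested_max_no_of_attributes(100, 4, 128))  # 600
```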

  • MaxNoOfTables

    Table 21.73 This table provides type and value information for the MaxNoOfTables data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 128
    Range 8 - 20320
    Restart Type N

    A table object is allocated for each table and for each unique hash index in the cluster. This parameter sets a suggested maximum number of table objects for the cluster as a whole; like MaxNoOfAttributes, it is not intended to function as a hard upper limit.

    (In older NDB Cluster releases, this parameter was sometimes treated as a hard limit for certain operations. This caused problems with NDB Cluster Replication, when it was possible to create more tables than could be replicated, and sometimes led to confusion when it was possible [or not possible, depending on the circumstances] to create more than MaxNoOfTables tables.)

    For each attribute that has a BLOB data type an extra table is used to store most of the BLOB data. These tables also must be taken into account when defining the total number of tables.

    The default value of this parameter is 128. The minimum is 8 and the maximum is 20320. Each table object consumes approximately 20KB per node.

    Note

    The sum of MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes must not exceed 2^32 − 2 (4294967294).

  • MaxNoOfOrderedIndexes

    Table 21.74 This table provides type and value information for the MaxNoOfOrderedIndexes data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 128
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    For each ordered index in the cluster, an object is allocated describing what is being indexed and its storage segments. By default, each index so defined also defines an ordered index. Each unique index and primary key has both an ordered index and a hash index. MaxNoOfOrderedIndexes sets the total number of ordered indexes that can be in use in the system at any one time.

    The default value of this parameter is 128. Each index object consumes approximately 10KB of data per node.

    Note

    The sum of MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes must not exceed 2^32 − 2 (4294967294).

  • MaxNoOfUniqueHashIndexes

    Table 21.75 This table provides type and value information for the MaxNoOfUniqueHashIndexes data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 64
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    For each unique index that is not a primary key, a special table is allocated that maps the unique key to the primary key of the indexed table. By default, an ordered index is also defined for each unique index. To prevent this, you must specify the USING HASH option when defining the unique index.

    The default value is 64. Each index consumes approximately 15KB per node.

    Note

    The sum of MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes must not exceed 2^32 − 2 (4294967294).

  • MaxNoOfTriggers

    Table 21.76 This table provides type and value information for the MaxNoOfTriggers data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 768
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Internal update, insert, and delete triggers are allocated for each unique hash index. (This means that three triggers are created for each unique hash index.) However, an ordered index requires only a single trigger object. Backups also use three trigger objects for each normal table in the cluster.

    Replication between clusters also makes use of internal triggers.

    This parameter sets the maximum number of trigger objects in the cluster.

    The default value is 768.

  • MaxNoOfSubscriptions

    Table 21.77 This table provides type and value information for the MaxNoOfSubscriptions data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Each NDB table in an NDB Cluster requires a subscription in the NDB kernel. For some NDB API applications, it may be necessary or desirable to change this parameter. However, for normal usage with MySQL servers acting as SQL nodes, there is not any need to do so.

    The default value for MaxNoOfSubscriptions is 0, which is treated as equal to MaxNoOfTables. Each subscription consumes 108 bytes.

  • MaxNoOfSubscribers

    Table 21.78 This table provides type and value information for the MaxNoOfSubscribers data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter is of interest only when using NDB Cluster Replication. The default value is 0, which is treated as 2 * MaxNoOfTables; that is, there is one subscription per NDB table for each of two MySQL servers (one acting as the replication master and the other as the slave). Each subscriber uses 16 bytes of memory.

    When using circular replication, multi-master replication, and other replication setups involving more than 2 MySQL servers, you should increase this parameter to the number of mysqld processes included in replication (this is often, but not always, the same as the number of clusters). For example, if you have a circular replication setup using three NDB Clusters, with one mysqld attached to each cluster, and each of these mysqld processes acts as a master and as a slave, you should set MaxNoOfSubscribers equal to 3 * MaxNoOfTables.

    For more information, see Section 21.6, “NDB Cluster Replication”.
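For the three-cluster circular setup just described, the setting would look like this in each cluster's config.ini (values are illustrative; config.ini takes literal numbers, so 3 * MaxNoOfTables must be computed by hand):

```ini
# config.ini fragment (illustrative)
[ndbd default]
MaxNoOfTables=128
# 3 mysqld processes participate in replication: 3 * 128
MaxNoOfSubscribers=384
```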

  • MaxNoOfConcurrentSubOperations

    Table 21.79 This table provides type and value information for the MaxNoOfConcurrentSubOperations data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 256
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter sets a ceiling on the number of operations that can be performed by all API nodes in the cluster at one time. The default value (256) is sufficient for normal operations, and might need to be adjusted only in scenarios where there are a great many API nodes each performing a high volume of operations concurrently.

Boolean parameters.  The behavior of data nodes is also affected by a set of [ndbd] parameters taking on boolean values. These parameters can each be specified as TRUE by setting them equal to 1 or Y, and as FALSE by setting them equal to 0 or N.

  • CompressedBackup

    Table 21.80 This table provides type and value information for the CompressedBackup data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    Enabling this parameter causes backup files to be compressed. The compression used is equivalent to gzip --fast, and can save 50% or more of the space required on the data node to store uncompressed backup files. Compressed backups can be enabled for individual data nodes, or for all data nodes (by setting this parameter in the [ndbd default] section of the config.ini file).

    Important

    You cannot restore a compressed backup to a cluster running a MySQL version that does not support this feature.

    The default value is 0 (disabled).
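
    To enable compressed backups on all data nodes at once, the parameter can be set in the [ndbd default] section, as in this minimal sketch:

```ini
[ndbd default]
# Compress backup files (equivalent to gzip --fast); such backups
# cannot be restored on MySQL versions lacking this feature.
CompressedBackup = 1
```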

  • CompressedLCP

    Table 21.81 This table provides type and value information for the CompressedLCP data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    Setting this parameter to 1 causes local checkpoint files to be compressed. The compression used is equivalent to gzip --fast, and can save 50% or more of the space required on the data node to store uncompressed checkpoint files. Compressed LCPs can be enabled for individual data nodes, or for all data nodes (by setting this parameter in the [ndbd default] section of the config.ini file).

    Important

    You cannot restore a compressed local checkpoint to a cluster running a MySQL version that does not support this feature.

    The default value is 0 (disabled).

  • CrashOnCorruptedTuple

    Table 21.82 This table provides type and value information for the CrashOnCorruptedTuple data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default true
    Range true, false
    Restart Type S

    When this parameter is enabled, it forces a data node to shut down whenever it encounters a corrupted tuple. In NDB 7.5, it is enabled by default.

  • Diskless

    Table 21.83 This table provides type and value information for the Diskless data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units true|false (1|0)
    Default false
    Range true, false
    Restart Type IS

    It is possible to specify NDB Cluster tables as diskless, meaning that tables are not checkpointed to disk and that no logging occurs. Such tables exist only in main memory. A consequence of using diskless tables is that neither the tables nor the records in those tables survive a crash. However, when operating in diskless mode, it is possible to run ndbd on a diskless computer.

    Important

    This feature causes the entire cluster to operate in diskless mode.

    When this feature is enabled, Cluster online backup is disabled. In addition, a partial start of the cluster is not possible.

    Diskless is disabled by default.

  • LateAlloc

    Table 21.84 This table provides type and value information for the LateAlloc data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 1
    Range 0 - 1
    Restart Type N

    Allocate memory for this data node after a connection to the management server has been established. Enabled by default.

  • LockPagesInMainMemory

    Table 21.85 This table provides type and value information for the LockPagesInMainMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 0
    Range 0 - 2
    Restart Type N

    For a number of operating systems, including Solaris and Linux, it is possible to lock a process into memory and so avoid any swapping to disk. This can be used to help guarantee the cluster's real-time characteristics.

    This parameter takes one of the integer values 0, 1, or 2, which act as shown in the following list:

    • 0: Disables locking. This is the default value.

    • 1: Performs the lock after allocating memory for the process.

    • 2: Performs the lock before memory for the process is allocated.

    If the operating system is not configured to permit unprivileged users to lock pages, then the data node process making use of this parameter may have to be run as system root. (LockPagesInMainMemory uses the mlockall function. From Linux kernel 2.6.9, unprivileged users can lock memory as limited by max locked memory. For more information, see ulimit -l and http://linux.die.net/man/2/mlock).

    Note

    In older NDB Cluster releases, this parameter was a Boolean. 0 or false was the default setting, and disabled locking. 1 or true enabled locking of the process after its memory was allocated. NDB Cluster 7.5 treats true or false for the value of this parameter as an error.

    Important

    Beginning with glibc 2.10, glibc uses per-thread arenas to reduce lock contention on a shared pool, which consumes real memory. In general, a data node process does not need per-thread arenas, since it does not perform any memory allocation after startup. (This difference in allocators does not appear to affect performance significantly.)

    The glibc behavior is intended to be configurable via the MALLOC_ARENA_MAX environment variable, but a bug in this mechanism prior to glibc 2.16 meant that this variable could not be set to less than 8, so that the wasted memory could not be reclaimed. (Bug #15907219; see also http://sourceware.org/bugzilla/show_bug.cgi?id=13137 for more information concerning this issue.)

    One possible workaround for this problem is to use the LD_PRELOAD environment variable to preload a jemalloc memory allocation library to take the place of that supplied with glibc.
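
    A minimal sketch of enabling memory locking after allocation; keep in mind that an unprivileged data node process also needs a sufficient "max locked memory" limit (see ulimit -l), or must run as root:

```ini
[ndbd default]
# 0 (default) = no locking; 1 = lock after memory is allocated;
# 2 = lock before memory is allocated.
LockPagesInMainMemory = 1
```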

  • ODirect

    Table 21.86 This table provides type and value information for the ODirect data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    Enabling this parameter causes NDB to attempt using O_DIRECT writes for LCP, backups, and redo logs, often lowering kswapd and CPU usage. When using NDB Cluster on Linux, enable ODirect if you are using a 2.6 or later kernel.

    ODirect is disabled by default.

  • ODirectSyncFlag

    Table 21.87 This table provides type and value information for the ODirectSyncFlag data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    When this parameter is enabled, redo log writes are performed such that each completed file system write is handled as a call to fsync. The setting for this parameter is ignored if at least one of the following conditions is true:

    • ODirect is not enabled.

    • InitFragmentLogFiles is set to SPARSE.

    Disabled by default.
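
    Because the ODirectSyncFlag setting is ignored unless ODirect is also enabled, the two parameters are normally set together, as in this sketch:

```ini
[ndbd default]
# Attempt O_DIRECT writes for LCPs, backups, and redo logs
# (recommended on Linux kernel 2.6 or later).
ODirect = 1
# Treat each completed redo log write as an fsync call; ignored when
# ODirect is disabled or InitFragmentLogFiles is set to SPARSE.
ODirectSyncFlag = 1
```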

  • RestartOnErrorInsert

    Table 21.88 This table provides type and value information for the RestartOnErrorInsert data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units error code
    Default 2
    Range 0 - 4
    Restart Type N

    This feature is accessible only when building the debug version where it is possible to insert errors in the execution of individual blocks of code as part of testing.

    This feature is disabled by default.

  • StopOnError

    Table 21.89 This table provides type and value information for the StopOnError data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default 1
    Range 0, 1
    Restart Type N

    This parameter specifies whether a data node process should exit or perform an automatic restart when an error condition is encountered.

    This parameter's default value is 1; this means that, by default, an error causes the data node process to halt.

    When an error is encountered and StopOnError is 0, the data node process is restarted.

    Prior to NDB Cluster 7.5.5, if the data node process exits in an uncontrolled fashion (due, for example, to performing kill -9 on the data node process while performing a query, or to a segmentation fault), and StopOnError is set to 0, the angel process attempts to restart it in exactly the same way as it was started previously—that is, using the same startup options that were employed the last time the node was started. Thus, if the data node process was originally started using the --initial option, it is also restarted with --initial. This means that, in such cases, if the failure occurs on a sufficient number of data nodes in a very short interval, the effect is the same as if you had performed an initial restart of the entire cluster, leading to loss of all data. This issue is resolved in NDB Cluster 7.5.5 and later NDB 7.5 releases (Bug #83510, Bug #24945638).

    Users of MySQL Cluster Manager should note that, when StopOnError equals 1, this prevents the MySQL Cluster Manager agent from restarting any data nodes after it has performed its own restart and recovery. See Starting and Stopping the Agent on Linux, for more information.
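
    A sketch of enabling automatic restarts; leave the default of 1 when the cluster is managed by MySQL Cluster Manager:

```ini
[ndbd default]
# 0 = the angel process restarts the data node after an error;
# 1 (default) = the data node process halts on error.
StopOnError = 0
```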

  • UseShm

    Table 21.90 This table provides type and value information for the UseShm data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.6
    Type or units boolean
    Default false
    Range true, false
    Restart Type S

    Use shared memory connections between this data node and the API node also running on this host. Set to 1 to enable.

    See Section 21.3.3.12, “NDB Cluster Shared Memory Connections”, for more information.

Controlling Timeouts, Intervals, and Disk Paging

There are a number of [ndbd] parameters specifying timeouts and intervals between various actions in Cluster data nodes. Most of the timeout values are specified in milliseconds. Any exceptions to this are mentioned where applicable.

  • TimeBetweenWatchDogCheck

    Table 21.91 This table provides type and value information for the TimeBetweenWatchDogCheck data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 6000
    Range 70 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    To prevent the main thread from getting stuck in an endless loop at some point, a watchdog thread checks the main thread. This parameter specifies the number of milliseconds between checks. If the process remains in the same state after three checks, the watchdog thread terminates it.

    This parameter can easily be changed for purposes of experimentation or to adapt to local conditions. It can be specified on a per-node basis although there seems to be little reason for doing so.

    The default timeout is 6000 milliseconds (6 seconds).

  • TimeBetweenWatchDogCheckInitial

    Table 21.92 This table provides type and value information for the TimeBetweenWatchDogCheckInitial data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 6000
    Range 70 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This is similar to the TimeBetweenWatchDogCheck parameter, except that TimeBetweenWatchDogCheckInitial controls the amount of time that passes between execution checks inside a storage node in the early start phases during which memory is allocated.

    The default timeout is 6000 milliseconds (6 seconds).

  • StartPartialTimeout

    Table 21.93 This table provides type and value information for the StartPartialTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 30000
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter specifies how long the Cluster waits for all data nodes to come up before the cluster initialization routine is invoked. This timeout is used to avoid a partial Cluster startup whenever possible.

    This parameter is overridden when performing an initial start or initial restart of the cluster.

    The default value is 30000 milliseconds (30 seconds). 0 disables the timeout, in which case the cluster may start only if all nodes are available.

  • StartPartitionedTimeout

    Table 21.94 This table provides type and value information for the StartPartitionedTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 60000
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    If the cluster is ready to start after waiting for StartPartialTimeout milliseconds but is still possibly in a partitioned state, the cluster waits until this timeout has also passed. If StartPartitionedTimeout is set to 0, the cluster waits indefinitely.

    This parameter is overridden when performing an initial start or initial restart of the cluster.

    The default timeout is 60000 milliseconds (60 seconds).

  • StartFailureTimeout

    Table 21.95 This table provides type and value information for the StartFailureTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    If a data node has not completed its startup sequence within the time specified by this parameter, the node startup fails. Setting this parameter to 0 (the default value) means that no data node timeout is applied.

    For nonzero values, this parameter is measured in milliseconds. For data nodes containing extremely large amounts of data, this parameter should be increased. For example, in the case of a data node containing several gigabytes of data, a period as long as 10 to 15 minutes (that is, 600000 to 1000000 milliseconds) might be required to perform a node restart.
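
    The three startup timeouts can be sketched together in config.ini; the StartFailureTimeout value shown is an illustrative choice for nodes holding several gigabytes of data:

```ini
[ndbd default]
# Wait up to 30 s for all data nodes before trying a partial start.
StartPartialTimeout = 30000
# Wait up to 60 s more if the cluster may be in a partitioned state.
StartPartitionedTimeout = 60000
# Fail any node that has not started within 15 minutes (0 = no limit).
StartFailureTimeout = 900000
```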

  • StartNoNodeGroupTimeout

    Table 21.96 This table provides type and value information for the StartNoNodeGroupTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 15000
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    When a data node is configured with Nodegroup = 65536, it is regarded as not being assigned to any node group. When that is done, the cluster waits StartNoNodegroupTimeout milliseconds, then treats such nodes as though they had been added to the list passed to the --nowait-nodes option, and starts. The default value is 15000 (that is, the management server waits 15 seconds). Setting this parameter equal to 0 means that the cluster waits indefinitely.

    StartNoNodegroupTimeout must be the same for all data nodes in the cluster; for this reason, you should always set it in the [ndbd default] section of the config.ini file, rather than for individual data nodes.

    See Section 21.5.15, “Adding NDB Cluster Data Nodes Online”, for more information.

  • HeartbeatIntervalDbDb

    Table 21.97 This table provides type and value information for the HeartbeatIntervalDbDb data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 5000
    Range 10 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    One of the primary methods of discovering failed nodes is by the use of heartbeats. This parameter states how often heartbeat signals are sent and how often to expect to receive them. Heartbeats cannot be disabled.

    After missing four heartbeat intervals in a row, the node is declared dead. Thus, the maximum time for discovering a failure through the heartbeat mechanism is five times the heartbeat interval.

    The default heartbeat interval is 5000 milliseconds (5 seconds). This parameter must not be changed drastically and should not vary widely between nodes. If one node uses 5000 milliseconds and the node watching it uses 1000 milliseconds, obviously the node will be declared dead very quickly. This parameter can be changed during an online software upgrade, but only in small increments.

    See also Network communication and latency, as well as the description of the ConnectCheckIntervalDelay configuration parameter.
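
    A sketch of the default setting, with the resulting worst-case detection time worked out in the comment:

```ini
[ndbd default]
# A node is declared dead after missing four heartbeat intervals, so
# discovering a failure can take up to 5 * 5000 ms = 25 s with this
# (default) value. Keep the value uniform across data nodes and
# change it only in small increments.
HeartbeatIntervalDbDb = 5000
```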

  • HeartbeatIntervalDbApi

    Table 21.98 This table provides type and value information for the HeartbeatIntervalDbApi data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 1500
    Range 100 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Each data node sends heartbeat signals to each MySQL server (SQL node) to ensure that it remains in contact. If a MySQL server fails to send a heartbeat in time it is declared dead, in which case all ongoing transactions are completed and all resources released. The SQL node cannot reconnect until all activities initiated by the previous MySQL instance have been completed. The three-heartbeat criteria for this determination are the same as described for HeartbeatIntervalDbDb.

    The default interval is 1500 milliseconds (1.5 seconds). This interval can vary between individual data nodes because each data node watches the MySQL servers connected to it, independently of all other data nodes.

    For more information, see Network communication and latency.

  • HeartbeatOrder

    Table 21.99 This table provides type and value information for the HeartbeatOrder data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 0
    Range 0 - 65535
    Restart Type S

    Data nodes send heartbeats to one another in a circular fashion whereby each data node monitors the previous one. If a heartbeat is not detected by a given data node, this node declares the previous data node in the circle dead (that is, no longer accessible by the cluster). The determination that a data node is dead is done globally; in other words, once a data node is declared dead, it is regarded as such by all nodes in the cluster.

    It is possible for heartbeats between data nodes residing on different hosts to be too slow compared to heartbeats between other pairs of nodes (for example, due to a very low heartbeat interval or a temporary connection problem), such that a data node is declared dead even though the node can still function as part of the cluster.

    In this type of situation, it may be that the order in which heartbeats are transmitted between data nodes makes a difference as to whether or not a particular data node is declared dead. If this declaration occurs unnecessarily, this can in turn lead to the unnecessary loss of a node group and as thus to a failure of the cluster.

    Consider a setup where there are 4 data nodes A, B, C, and D running on 2 host computers host1 and host2, and that these data nodes make up 2 node groups, as shown in the following table:

    Table 21.100 Four data nodes A, B, C, D running on two host computers host1, host2; each data node belongs to one of two node groups.

    Node Group Nodes Running on host1 Nodes Running on host2
    Node Group 0: Node A Node B
    Node Group 1: Node C Node D

    Suppose the heartbeats are transmitted in the order A->B->C->D->A. In this case, the loss of the heartbeat between the hosts causes node B to declare node A dead and node C to declare node B dead. This results in loss of Node Group 0, and so the cluster fails. On the other hand, if the order of transmission is A->B->D->C->A (and all other conditions remain as previously stated), the loss of the heartbeat causes nodes A and D to be declared dead; in this case, each node group has one surviving node, and the cluster survives.

    The HeartbeatOrder configuration parameter makes the order of heartbeat transmission user-configurable. The default value for HeartbeatOrder is zero; allowing the default value to be used on all data nodes causes the order of heartbeat transmission to be determined by NDB. If this parameter is used, it must be set to a nonzero value (maximum 65535) for every data node in the cluster, and this value must be unique for each data node; this causes the heartbeat transmission to proceed from data node to data node in the order of their HeartbeatOrder values from lowest to highest (and then directly from the data node having the highest HeartbeatOrder to the data node having the lowest value, to complete the circle). The values need not be consecutive. For example, to force the heartbeat transmission order A->B->D->C->A in the scenario outlined previously, you could set the HeartbeatOrder values as shown here:

    Table 21.101 HeartbeatOrder values to force a heartbeat transition order of A->B->D->C->A.

    Node HeartbeatOrder Value
    A 10
    B 20
    C 30
    D 25

    To use this parameter to change the heartbeat transmission order in a running NDB Cluster, you must first set HeartbeatOrder for each data node in the cluster in the global configuration (config.ini) file (or files). To cause the change to take effect, you must perform either of the following:

    • A complete shutdown and restart of the entire cluster.

    • 2 rolling restarts of the cluster in succession. All nodes must be restarted in the same order in both rolling restarts.

    You can use DUMP 908 to observe the effect of this parameter in the data node logs.
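
    The values from Table 21.101 would be placed in the per-node [ndbd] sections of config.ini; the host names here are carried over from the earlier example and are assumptions for illustration:

```ini
# Force the heartbeat transmission order A->B->D->C->A.
[ndbd]              # Node A (node group 0)
HostName = host1
HeartbeatOrder = 10

[ndbd]              # Node B (node group 0)
HostName = host2
HeartbeatOrder = 20

[ndbd]              # Node C (node group 1)
HostName = host1
HeartbeatOrder = 30

[ndbd]              # Node D (node group 1)
HostName = host2
HeartbeatOrder = 25
```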

  • ConnectCheckIntervalDelay

    Table 21.102 This table provides type and value information for the ConnectCheckIntervalDelay data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter enables connection checking between data nodes after one of them has failed heartbeat checks for 5 intervals of up to HeartbeatIntervalDbDb milliseconds.

    Such a data node that further fails to respond within an interval of ConnectCheckIntervalDelay milliseconds is considered suspect, and is considered dead after two such intervals. This can be useful in setups with known latency issues.

    The default value for this parameter is 0 (disabled).

  • TimeBetweenLocalCheckpoints

    Table 21.103 This table provides type and value information for the TimeBetweenLocalCheckpoints data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units number of 4-byte words, as a base-2 logarithm
    Default 20
    Range 0 - 31
    Restart Type N

    This parameter is an exception in that it does not specify a time to wait before starting a new local checkpoint; rather, it is used to ensure that local checkpoints are not performed in a cluster where relatively few updates are taking place. In most clusters with high update rates, it is likely that a new local checkpoint is started immediately after the previous one has been completed.

    The size of all write operations executed since the start of the previous local checkpoints is added. This parameter is also exceptional in that it is specified as the base-2 logarithm of the number of 4-byte words, so that the default value 20 means 4MB (4 × 2^20 bytes) of write operations, 21 would mean 8MB, and so on up to a maximum value of 31, which equates to 8GB of write operations.

    All the write operations in the cluster are added together. Setting TimeBetweenLocalCheckpoints to 6 or less means that local checkpoints will be executed continuously without pause, independent of the cluster's workload.
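
    A sketch of the default, with the logarithmic encoding spelled out:

```ini
[ndbd default]
# Value is the base-2 logarithm of the number of 4-byte words:
#   20 -> 4 * 2^20 bytes = 4MB of writes between local checkpoints
#   21 -> 8MB, ..., 31 (maximum) -> 8GB
# Settings of 6 or less cause LCPs to run continuously.
TimeBetweenLocalCheckpoints = 20
```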

  • TimeBetweenGlobalCheckpoints

    Table 21.104 This table provides type and value information for the TimeBetweenGlobalCheckpoints data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 2000
    Range 20 - 32000
    Restart Type N

    When a transaction is committed, it is committed in main memory in all nodes on which the data is mirrored. However, transaction log records are not flushed to disk as part of the commit. The reasoning behind this behavior is that having the transaction safely committed on at least two autonomous host machines should meet reasonable standards for durability.

    It is also important to ensure that even the worst of cases—a complete crash of the cluster—is handled properly. To guarantee that this happens, all transactions taking place within a given interval are put into a global checkpoint, which can be thought of as a set of committed transactions that has been flushed to disk. In other words, as part of the commit process, a transaction is placed in a global checkpoint group. Later, this group's log records are flushed to disk, and then the entire group of transactions is safely committed to disk on all computers in the cluster.

    This parameter defines the interval between global checkpoints. The default is 2000 milliseconds.

  • TimeBetweenGlobalCheckpointsTimeout

    Table 21.105 This table provides type and value information for the TimeBetweenGlobalCheckpointsTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 120000
    Range 10 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter defines the minimum timeout between global checkpoints. The default is 120000 milliseconds.

  • TimeBetweenEpochs

    Table 21.106 This table provides type and value information for the TimeBetweenEpochs data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 100
    Range 0 - 32000
    Restart Type N

    This parameter defines the interval between synchronization epochs for NDB Cluster Replication. The default value is 100 milliseconds.

    TimeBetweenEpochs is part of the implementation of micro-GCPs, which can be used to improve the performance of NDB Cluster Replication.

  • TimeBetweenEpochsTimeout

    Table 21.107 This table provides type and value information for the TimeBetweenEpochsTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 0
    Range 0 - 256000
    Restart Type N

    This parameter defines a timeout for synchronization epochs for NDB Cluster Replication. If a node fails to participate in a global checkpoint within the time determined by this parameter, the node is shut down. The default value is 0; in other words, the timeout is disabled.

    TimeBetweenEpochsTimeout is part of the implementation of micro-GCPs, which can be used to improve the performance of NDB Cluster Replication.

    The current value of this parameter and a warning are written to the cluster log whenever a GCP save takes longer than 1 minute or a GCP commit takes longer than 10 seconds.

    Setting this parameter to zero has the effect of disabling GCP stops caused by save timeouts, commit timeouts, or both. The maximum possible value for this parameter is 256000 milliseconds.

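    The global checkpoint and epoch parameters described above all belong in the [ndbd default] section of config.ini. A minimal sketch, using the documented default values:

```ini
[ndbd default]
TimeBetweenGlobalCheckpoints=2000          # GCP interval, in milliseconds
TimeBetweenGlobalCheckpointsTimeout=120000 # minimum timeout between GCPs
TimeBetweenEpochs=100                      # micro-GCP epoch interval
TimeBetweenEpochsTimeout=0                 # 0 disables the epoch timeout
```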

  • MaxBufferedEpochs

    Table 21.108 This table provides type and value information for the MaxBufferedEpochs data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units epochs
    Default 100
    Range 0 - 100000
    Restart Type N

    The number of unprocessed epochs by which a subscribing node can lag behind. Exceeding this number causes a lagging subscriber to be disconnected.

    The default value of 100 is sufficient for most normal operations. If a subscribing node does lag enough to cause disconnections, it is usually due to network or scheduling issues with regard to processes or threads. (In rare circumstances, the problem may be due to a bug in the NDB client.) It may be desirable to set the value lower than the default when epochs are longer.

    Disconnecting a lagging subscriber prevents client issues from affecting the data node service, which could otherwise run out of memory for buffering data and eventually shut down. Instead, only the client is affected as a result of the disconnect (for example, by gap events in the binary log), forcing the client to reconnect or restart the process.

  • MaxBufferedEpochBytes

    Table 21.109 This table provides type and value information for the MaxBufferedEpochBytes data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 26214400
    Range 26214400 (0x01900000) - 4294967039 (0xFFFFFEFF)
    Restart Type N

    The total number of bytes allocated for buffering epochs by this node.

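    As a sketch, both epoch-buffering limits can be set together in config.ini; the values shown are the documented defaults (26214400 bytes is 25MB):

```ini
[ndbd default]
MaxBufferedEpochs=100            # disconnect a subscriber lagging by more epochs
MaxBufferedEpochBytes=26214400   # 25MB allocated for buffering epochs
```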

  • TimeBetweenInactiveTransactionAbortCheck

    Table 21.110 This table provides type and value information for the TimeBetweenInactiveTransactionAbortCheck data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 1000
    Range 1000 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Timeout handling is performed by checking a timer on each transaction once for every interval specified by this parameter. Thus, if this parameter is set to 1000 milliseconds, every transaction will be checked for timing out once per second.

    The default value is 1000 milliseconds (1 second).

  • TransactionInactiveTimeout

    Table 21.111 This table provides type and value information for the TransactionInactiveTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default [see text]
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter states the maximum time that is permitted to lapse between operations in the same transaction before the transaction is aborted.

    The default for this parameter is 4G (also the maximum). For a real-time database that needs to ensure that no transaction keeps locks for too long, this parameter should be set to a relatively small value. Setting it to 0 means that the application never times out. The unit is milliseconds.

  • TransactionDeadlockDetectionTimeout

    Table 21.112 This table provides type and value information for the TransactionDeadlockDetectionTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 1200
    Range 50 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    When a node executes a query involving a transaction, the node waits for the other nodes in the cluster to respond before continuing. This parameter sets the amount of time that the transaction can spend executing within a data node, that is, the time that the transaction coordinator waits for each data node participating in the transaction to execute a request.

    A failure to respond can occur for any of the following reasons:

    • The node is dead

    • The operation has entered a lock queue

    • The node requested to perform the action could be heavily overloaded.

    This timeout parameter states how long the transaction coordinator waits for query execution by another node before aborting the transaction, and is important for both node failure handling and deadlock detection.

    The default timeout value is 1200 milliseconds (1.2 seconds).

    The minimum for this parameter is 50 milliseconds.

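    A config.ini sketch combining the transaction timeout parameters discussed above; the TransactionInactiveTimeout value here is illustrative, while the other two are the documented defaults:

```ini
[ndbd default]
TimeBetweenInactiveTransactionAbortCheck=1000  # check each transaction's timer once per second
TransactionInactiveTimeout=60000               # abort after 60s idle (illustrative)
TransactionDeadlockDetectionTimeout=1200       # wait 1.2s for participating data nodes
```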

  • DiskSyncSize

    Table 21.113 This table provides type and value information for the DiskSyncSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 4M
    Range 32K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This is the maximum number of bytes to store before flushing data to a local checkpoint file. This is done to prevent write buffering, which can impede performance significantly. This parameter is not intended to take the place of TimeBetweenLocalCheckpoints.

    Note

    When ODirect is enabled, it is not necessary to set DiskSyncSize; in fact, in such cases its value is simply ignored.

    The default value is 4M (4 megabytes).

  • MaxDiskWriteSpeed

    Table 21.114 This table provides type and value information for the MaxDiskWriteSpeed data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 20M
    Range 1M - 1024G
    Restart Type S

    Set the maximum rate for writing to disk, in bytes per second, by local checkpoints and backup operations when no restarts (by this data node or any other data node) are taking place in this NDB Cluster.

    For setting the maximum rate of disk writes allowed while this data node is restarting, use MaxDiskWriteSpeedOwnRestart. For setting the maximum rate of disk writes allowed while other data nodes are restarting, use MaxDiskWriteSpeedOtherNodeRestart. The minimum speed for disk writes by all LCPs and backup operations can be adjusted by setting MinDiskWriteSpeed.

  • MaxDiskWriteSpeedOtherNodeRestart

    Table 21.115 This table provides type and value information for the MaxDiskWriteSpeedOtherNodeRestart data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 50M
    Range 1M - 1024G
    Restart Type S

    Set the maximum rate for writing to disk, in bytes per second, by local checkpoints and backup operations when one or more data nodes in this NDB Cluster are restarting, other than this node.

    For setting the maximum rate of disk writes allowed while this data node is restarting, use MaxDiskWriteSpeedOwnRestart. For setting the maximum rate of disk writes allowed when no data nodes are restarting anywhere in the cluster, use MaxDiskWriteSpeed. The minimum speed for disk writes by all LCPs and backup operations can be adjusted by setting MinDiskWriteSpeed.

  • MaxDiskWriteSpeedOwnRestart

    Table 21.116 This table provides type and value information for the MaxDiskWriteSpeedOwnRestart data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 200M
    Range 1M - 1024G
    Restart Type S

    Set the maximum rate for writing to disk, in bytes per second, by local checkpoints and backup operations while this data node is restarting.

    For setting the maximum rate of disk writes allowed while other data nodes are restarting, use MaxDiskWriteSpeedOtherNodeRestart. For setting the maximum rate of disk writes allowed when no data nodes are restarting anywhere in the cluster, use MaxDiskWriteSpeed. The minimum speed for disk writes by all LCPs and backup operations can be adjusted by setting MinDiskWriteSpeed.

  • MinDiskWriteSpeed

    Table 21.117 This table provides type and value information for the MinDiskWriteSpeed data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 10M
    Range 1M - 1024G
    Restart Type S

    Set the minimum rate for writing to disk, in bytes per second, by local checkpoints and backup operations.

    The maximum rates of disk writes allowed for LCPs and backups under various conditions are adjustable using the parameters MaxDiskWriteSpeed, MaxDiskWriteSpeedOwnRestart, and MaxDiskWriteSpeedOtherNodeRestart. See the descriptions of these parameters for more information.

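    The four disk-write-speed parameters form a single policy and are usually set together. A config.ini sketch using the documented defaults:

```ini
[ndbd default]
MaxDiskWriteSpeed=20M                  # no restarts in progress anywhere
MaxDiskWriteSpeedOtherNodeRestart=50M  # another data node is restarting
MaxDiskWriteSpeedOwnRestart=200M       # this data node is restarting
MinDiskWriteSpeed=10M                  # floor for LCP and backup writes
```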

  • ArbitrationTimeout

    Table 21.118 This table provides type and value information for the ArbitrationTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 7500
    Range 10 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter specifies how long data nodes wait for a response from the arbitrator to an arbitration message. If this is exceeded, the network is assumed to have split.

    The default value is 7500 milliseconds (7.5 seconds).

  • Arbitration

    Table 21.119 This table provides type and value information for the Arbitration data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units enumeration
    Default Default
    Range Default, Disabled, WaitExternal
    Restart Type N

    The Arbitration parameter enables a choice of arbitration schemes, corresponding to one of 3 possible values for this parameter:

    • Default.  This enables arbitration to proceed normally, as determined by the ArbitrationRank settings for the management and API nodes. This is the default value.

    • Disabled.  Setting Arbitration = Disabled in the [ndbd default] section of the config.ini file accomplishes the same task as setting ArbitrationRank to 0 on all management and API nodes. When Arbitration is set in this way, any ArbitrationRank settings are ignored.

    • WaitExternal.  The Arbitration parameter also makes it possible to configure arbitration in such a way that the cluster waits until after the time determined by ArbitrationTimeout has passed for an external cluster manager application to perform arbitration instead of handling arbitration internally. This can be done by setting Arbitration = WaitExternal in the [ndbd default] section of the config.ini file. For best results with the WaitExternal setting, it is recommended that ArbitrationTimeout be 2 times as long as the interval required by the external cluster manager to perform arbitration.

    Important

    This parameter should be used only in the [ndbd default] section of the cluster configuration file. The behavior of the cluster is unspecified when Arbitration is set to different values for individual data nodes.

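    As a sketch, external arbitration might be configured as follows; the ArbitrationTimeout value is illustrative and assumes an external cluster manager that needs about 10 seconds to arbitrate:

```ini
[ndbd default]
Arbitration=WaitExternal   # defer arbitration to an external cluster manager
ArbitrationTimeout=20000   # roughly 2x the manager's arbitration interval
```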

  • RestartSubscriberConnectTimeout

    Table 21.120 This table provides type and value information for the RestartSubscriberConnectTimeout data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units ms
    Default 12000
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type S

    This parameter determines the time that a data node waits for subscribing API nodes to connect. Once this timeout expires, any missing API nodes are disconnected from the cluster. To disable this timeout, set RestartSubscriberConnectTimeout to 0.

    While this parameter is specified in milliseconds, the timeout itself is resolved to the next-greatest whole second.

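    Assuming "resolved to the next-greatest whole second" means rounding up, the effective timeout for a given setting can be sketched as:

```python
import math

def effective_timeout_ms(restart_subscriber_connect_timeout_ms):
    """Round the configured timeout up to a whole second.

    Sketch of the documented behavior: the parameter is specified in
    milliseconds but resolved upward to a whole number of seconds.
    """
    return math.ceil(restart_subscriber_connect_timeout_ms / 1000) * 1000
```

    For example, under this assumption a setting of 12500 behaves like 13000 milliseconds.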

Buffering and logging.  Several [ndbd] configuration parameters enable the advanced user to have more control over the resources used by node processes and to adjust various buffer sizes at need.

These buffers are used as front ends to the file system when writing log records to disk. If the node is running in diskless mode, these parameters can be set to their minimum values without penalty due to the fact that disk writes are faked by the NDB storage engine's file system abstraction layer.

  • UndoIndexBuffer

    Table 21.121 This table provides type and value information for the UndoIndexBuffer data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 2M
    Range 1M - 4294967039 (0xFFFFFEFF)
    Restart Type N

    The UNDO index buffer, whose size is set by this parameter, is used during local checkpoints. The NDB storage engine uses a recovery scheme based on checkpoint consistency in conjunction with an operational REDO log. To produce a consistent checkpoint without blocking the entire system for writes, UNDO logging is done while performing the local checkpoint. UNDO logging is activated on a single table fragment at a time. This optimization is possible because tables are stored entirely in main memory.

    The UNDO index buffer is used for the updates on the primary key hash index. Inserts and deletes rearrange the hash index; the NDB storage engine writes UNDO log records that map all physical changes to an index page so that they can be undone at system restart. It also logs all active insert operations for each fragment at the start of a local checkpoint.

    Reads and updates set lock bits and update a header in the hash index entry. These changes are handled by the page-writing algorithm to ensure that these operations need no UNDO logging.

    This buffer is 2MB by default. The minimum value is 1MB, which is sufficient for most applications. For applications doing extremely large or numerous inserts and deletes together with large transactions and large primary keys, it may be necessary to increase the size of this buffer. If this buffer is too small, the NDB storage engine issues internal error code 677 (Index UNDO buffers overloaded).

    Important

    It is not safe to decrease the value of this parameter during a rolling restart.

  • UndoDataBuffer

    Table 21.122 This table provides type and value information for the UndoDataBuffer data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 16M
    Range 1M - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter sets the size of the UNDO data buffer, which performs a function similar to that of the UNDO index buffer, except the UNDO data buffer is used with regard to data memory rather than index memory. This buffer is used during the local checkpoint phase of a fragment for inserts, deletes, and updates.

    Because UNDO log entries tend to grow larger as more operations are logged, this buffer is also larger than its index memory counterpart, with a default value of 16MB.

    This amount of memory may be unnecessarily large for some applications. In such cases, it is possible to decrease this size to a minimum of 1MB.

    It is rarely necessary to increase the size of this buffer. If there is such a need, it is a good idea to check whether the disks can actually handle the load caused by database update activity. A lack of sufficient disk space cannot be overcome by increasing the size of this buffer.

    If this buffer is too small and gets congested, the NDB storage engine issues internal error code 891 (Data UNDO buffers overloaded).

    Important

    It is not safe to decrease the value of this parameter during a rolling restart.

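    A config.ini sketch for the two UNDO buffers, using the documented defaults (recall that neither should be decreased during a rolling restart):

```ini
[ndbd default]
UndoIndexBuffer=2M   # too small -> internal error code 677
UndoDataBuffer=16M   # too small -> internal error code 891
```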

  • RedoBuffer

    Table 21.123 This table provides type and value information for the RedoBuffer data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 32M
    Range 1M - 4294967039 (0xFFFFFEFF)
    Restart Type N

    All update activities also need to be logged. The REDO log makes it possible to replay these updates whenever the system is restarted. The NDB recovery algorithm uses a fuzzy checkpoint of the data together with the UNDO log, and then applies the REDO log to play back all changes up to the restoration point.

    RedoBuffer sets the size of the buffer in which the REDO log is written. The default value is 32MB; the minimum value is 1MB.

    If this buffer is too small, the NDB storage engine issues error code 1221 (REDO log buffers overloaded). For this reason, you should exercise care if you attempt to decrease the value of RedoBuffer as part of an online change in the cluster's configuration.

    ndbmtd allocates a separate buffer for each LDM thread (see ThreadConfig). For example, with 4 LDM threads, an ndbmtd data node actually has 4 buffers and allocates RedoBuffer bytes to each one, for a total of 4 * RedoBuffer bytes.

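    The per-LDM-thread allocation means the memory cost of raising RedoBuffer multiplies with the thread count; a quick sketch of the arithmetic:

```python
def total_redo_buffer_bytes(redo_buffer_bytes, ldm_threads):
    """Total REDO buffer memory on an ndbmtd node.

    ndbmtd allocates one RedoBuffer-sized buffer per LDM thread.
    """
    return redo_buffer_bytes * ldm_threads
```

    With the default RedoBuffer of 32MB and 4 LDM threads, 128MB is allocated in total.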

  • EventLogBufferSize

    Table 21.124 This table provides type and value information for the EventLogBufferSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 8192
    Range 0 - 64K
    Restart Type S

    Controls the size of the circular buffer used for NDB log events within data nodes.

Controlling log messages.  In managing the cluster, it is very important to be able to control the number of log messages sent for various event types to stdout. For each event category, there are 16 possible event levels (numbered 0 through 15). Setting event reporting for a given event category to level 15 means all event reports in that category are sent to stdout; setting it to 0 means that there will be no event reports made in that category.

By default, only the startup message is sent to stdout, with the remaining event reporting level defaults being set to 0. The reason for this is that these messages are also sent to the management server's cluster log.

An analogous set of levels can be set for the management client to determine which event levels to record in the cluster log.

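As a sketch, the per-category levels described below are set in config.ini; the values here are illustrative, not recommendations:

```ini
[ndbd default]
LogLevelStartup=15   # send all startup events to stdout
LogLevelError=15     # send all error and warning events to stdout
LogLevelStatistic=0  # suppress statistical event reports
```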

  • LogLevelStartup

    Table 21.125 This table provides type and value information for the LogLevelStartup data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 1
    Range 0 - 15
    Restart Type N

    The reporting level for events generated during startup of the process.

    The default level is 1.

  • LogLevelShutdown

    Table 21.126 This table provides type and value information for the LogLevelShutdown data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for events generated as part of graceful shutdown of a node.

    The default level is 0.

  • LogLevelStatistic

    Table 21.127 This table provides type and value information for the LogLevelStatistic data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for statistical events such as number of primary key reads, number of updates, number of inserts, information relating to buffer usage, and so on.

    The default level is 0.

  • LogLevelCheckpoint

    Table 21.128 This table provides type and value information for the LogLevelCheckpoint data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units log level
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for events generated by local and global checkpoints.

    The default level is 0.

  • LogLevelNodeRestart

    Table 21.129 This table provides type and value information for the LogLevelNodeRestart data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for events generated during node restart.

    The default level is 0.

  • LogLevelConnection

    Table 21.130 This table provides type and value information for the LogLevelConnection data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for events generated by connections between cluster nodes.

    The default level is 0.

  • LogLevelError

    Table 21.131 This table provides type and value information for the LogLevelError data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for events generated by errors and warnings by the cluster as a whole. These errors do not cause any node failure but are still considered worth reporting.

    The default level is 0.

  • LogLevelCongestion

    Table 21.132 This table provides type and value information for the LogLevelCongestion data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units level
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for events generated by congestion. These errors do not cause node failure but are still considered worth reporting.

    The default level is 0.

  • LogLevelInfo

    Table 21.133 This table provides type and value information for the LogLevelInfo data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 15
    Restart Type N

    The reporting level for events generated for information about the general state of the cluster.

    The default level is 0.

  • MemReportFrequency

    Table 21.134 This table provides type and value information for the MemReportFrequency data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter controls how often data node memory usage reports are recorded in the cluster log; it is an integer value representing the number of seconds between reports.

    Each data node's data memory and index memory usage is logged as both a percentage and a number of 32 KB pages of the DataMemory and (NDB 7.5 and earlier) IndexMemory, respectively, set in the config.ini file. For example, if DataMemory is equal to 100 MB, and a given data node is using 50 MB for data memory storage, the corresponding line in the cluster log might look like this:

    2006-12-24 01:18:16 [MgmSrvr] INFO -- Node 2: Data usage is 50%(1280 32K pages of total 2560)
    

    MemReportFrequency is not a required parameter. If used, it can be set for all cluster data nodes in the [ndbd default] section of config.ini, and can also be set or overridden for individual data nodes in the corresponding [ndbd] sections of the configuration file. The minimum value—which is also the default value—is 0, in which case memory reports are logged only when memory usage reaches certain percentages (80%, 90%, and 100%), as mentioned in the discussion of statistics events in Section 21.5.6.2, “NDB Cluster Log Events”.

  • StartupStatusReportFrequency

    Table 21.135 This table provides type and value information for the StartupStatusReportFrequency data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units seconds
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    When a data node is started with the --initial option, it initializes the redo log files during Start Phase 4 (see Section 21.5.1, “Summary of NDB Cluster Start Phases”). When very large values are set for NoOfFragmentLogFiles, FragmentLogFileSize, or both, this initialization can take a long time. You can force reports on the progress of this process to be logged periodically, by means of the StartupStatusReportFrequency configuration parameter. In this case, progress is reported in the cluster log, in terms of both the number of files and the amount of space that have been initialized, as shown here:

    2009-06-20 16:39:23 [MgmSrvr] INFO -- Node 1: Local redo log file initialization status:
    #Total files: 80, Completed: 60
    #Total MBytes: 20480, Completed: 15557
    2009-06-20 16:39:23 [MgmSrvr] INFO -- Node 2: Local redo log file initialization status:
    #Total files: 80, Completed: 60
    #Total MBytes: 20480, Completed: 15570
    

    These reports are logged every StartupStatusReportFrequency seconds during Start Phase 4. If StartupStatusReportFrequency is 0 (the default), then reports are written to the cluster log only at the beginning and at the completion of the redo log file initialization process.
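
    For example, a config.ini fragment such as the following (the values shown are illustrative only) causes initialization progress to be reported every 30 seconds:

    ```ini
    [ndbd default]
    # Report redo log initialization progress every 30 seconds during
    # --initial starts; 0 (the default) reports only at the beginning
    # and at the end of initialization
    StartupStatusReportFrequency=30
    ```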

Data Node Debugging Parameters

The following parameters are intended for use during testing or debugging of data nodes, and not for use in production.

  • DictTrace

    Table 21.136 This table provides type and value information for the DictTrace data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default undefined
    Range 0 - 100
    Restart Type N

    It is possible to cause logging of traces for events generated by creating and dropping tables using DictTrace. This parameter is useful only in debugging NDB kernel code. DictTrace takes an integer value. 0 (default - no logging) and 1 (logging enabled) are the only supported values prior to NDB 7.5.2. In NDB 7.5.2 and later, setting this parameter to 2 enables logging of additional DBDICT debugging output (Bug #20368450).

  • WatchdogImmediateKill

    Table 21.137 This table provides type and value information for the WatchDogImmediateKill data node configuration parameter

    Property Value
    Version (or later) NDB 7.6.7
    Type or units boolean
    Default false
    Range true, false
    Restart Type S

    In NDB 7.6.7 and later, you can cause threads to be killed immediately whenever watchdog issues occur by enabling the WatchdogImmediateKill data node configuration parameter. This parameter should be used only when debugging or troubleshooting, to obtain trace files reporting exactly what was occurring the instant that execution ceased.

Backup parameters.  The [ndbd] parameters discussed in this section define memory buffers set aside for execution of online backups.

  • BackupDataBufferSize

    Table 21.138 This table provides type and value information for the BackupDataBufferSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 16M
    Range 2M - 4294967039 (0xFFFFFEFF)
    Restart Type N
    Version (or later) NDB 7.5.1
    Type or units bytes
    Default 16M
    Range 512K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    In creating a backup, there are two buffers used for sending data to the disk. The backup data buffer is used to fill in data recorded by scanning a node's tables. Once this buffer has been filled to the level specified as BackupWriteSize, the pages are sent to disk. While flushing data to disk, the backup process can continue filling this buffer until it runs out of space. When this happens, the backup process pauses the scan and waits until some disk writes have completed, freeing up memory, so that scanning may continue.

    The default value for this parameter is 16MB. The minimum was changed from 2M to 512K in NDB 7.5.1. (Bug #22749509)

  • BackupDiskWriteSpeedPct

    Table 21.139 This table provides type and value information for the BackupDiskWriteSpeedPct data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units percent
    Default 50
    Range 0 - 90
    Restart Type N

    During normal operation, data nodes attempt to maximize the disk write speed used for local checkpoints and backups while remaining within the bounds set by MinDiskWriteSpeed and MaxDiskWriteSpeed. Disk write throttling gives each LDM thread an equal share of the total budget. This allows parallel LCPs to take place without exceeding the disk I/O budget. Because a backup is executed by only one LDM thread, this effectively caused a budget cut, resulting in longer backup completion times, and—if the rate of change is sufficiently high—in failure to complete the backup when the backup log buffer fill rate is higher than the achievable write rate.

    This problem can be addressed by using the BackupDiskWriteSpeedPct configuration parameter, which takes a value in the range 0-90 (inclusive) which is interpreted as the percentage of the node's maximum write rate budget that is reserved prior to sharing out the remainder of the budget among LDM threads for LCPs. The LDM thread running the backup receives the whole write rate budget for the backup, plus its (reduced) share of the write rate budget for local checkpoints. (This makes the disk write rate budget behave similarly to how it was handled in NDB Cluster 7.3 and earlier.)

    The default value for this parameter is 50 (interpreted as 50%).
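
    The arithmetic described above can be sketched as follows; the function name and the concrete numbers are illustrative, not part of NDB:

    ```python
    def write_rate_budgets(total_budget, backup_pct, ldm_threads):
        """Sketch of how BackupDiskWriteSpeedPct splits a node's maximum
        disk write rate budget (units are arbitrary, e.g. MB/s)."""
        # The stated percentage is reserved for the backup up front...
        backup_share = total_budget * backup_pct / 100.0
        # ...and the remainder is shared equally among LDM threads for LCPs.
        lcp_share_per_ldm = (total_budget - backup_share) / ldm_threads
        # The LDM thread running the backup gets the whole backup budget
        # plus its (reduced) share of the LCP budget.
        backup_ldm_total = backup_share + lcp_share_per_ldm
        return backup_ldm_total, lcp_share_per_ldm

    # With a 100 MB/s budget, the default 50%, and 4 LDM threads, the LDM
    # thread running the backup may write at 62.5 MB/s; each of the other
    # LDM threads gets 12.5 MB/s for local checkpoints.
    backup_ldm, other_ldm = write_rate_budgets(100, 50, 4)
    ```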

  • BackupLogBufferSize

    Table 21.140 This table provides type and value information for the BackupLogBufferSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 16M
    Range 2M - 4294967039 (0xFFFFFEFF)
    Restart Type N

    The backup log buffer fulfills a role similar to that played by the backup data buffer, except that it is used for generating a log of all table writes made during execution of the backup. The same principles apply for writing these pages as with the backup data buffer, except that when there is no more space in the backup log buffer, the backup fails. For that reason, the size of the backup log buffer must be large enough to handle the load caused by write activities while the backup is being made. See Section 21.5.3.3, “Configuration for NDB Cluster Backups”.

    The default value for this parameter should be sufficient for most applications. In fact, it is more likely for a backup failure to be caused by insufficient disk write speed than it is for the backup log buffer to become full. If the disk subsystem is not configured for the write load caused by applications, the cluster is unlikely to be able to perform the desired operations.

    It is preferable to configure cluster nodes in such a manner that the processor becomes the bottleneck rather than the disks or the network connections.

    The default value for this parameter is 16MB.

  • BackupMemory

    Table 21.141 This table provides type and value information for the BackupMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 32M
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter is deprecated, and subject to removal in a future version of NDB Cluster. Any setting made for it is ignored.

  • BackupReportFrequency

    Table 21.142 This table provides type and value information for the BackupReportFrequency data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units seconds
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter controls how often backup status reports are issued in the management client during a backup, as well as how often such reports are written to the cluster log (provided cluster event logging is configured to permit it—see Logging and checkpointing). BackupReportFrequency represents the time in seconds between backup status reports.

    The default value is 0.

  • BackupWriteSize

    Table 21.143 This table provides type and value information for the BackupWriteSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 256K
    Range 32K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter specifies the default size of messages written to disk by the backup log and backup data buffers.

    The default value for this parameter is 256KB.

  • BackupMaxWriteSize

    Table 21.144 This table provides type and value information for the BackupMaxWriteSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 1M
    Range 256K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter specifies the maximum size of messages written to disk by the backup log and backup data buffers.

    The default value for this parameter is 1MB.

Note

The location of the backup files is determined by the BackupDataDir data node configuration parameter.

Additional requirements.  When specifying these parameters, the following relationships must hold true. Otherwise, the data node will be unable to start.

  • BackupDataBufferSize >= BackupWriteSize + 188KB

  • BackupLogBufferSize >= BackupWriteSize + 16KB

  • BackupMaxWriteSize >= BackupWriteSize
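
A quick way to sanity-check a planned configuration against these rules is sketched below; the helper function is hypothetical (NDB itself enforces the equivalent checks when the data node starts):

```python
KB = 1024
MB = 1024 * KB

def backup_buffers_valid(data_buf, log_buf, write_size, max_write_size):
    """Return True if the backup buffer sizes (in bytes) satisfy the
    three start-time constraints listed above."""
    return (data_buf >= write_size + 188 * KB
            and log_buf >= write_size + 16 * KB
            and max_write_size >= write_size)

# The NDB 7.5 defaults (16M, 16M, 256K, 1M) satisfy all three rules.
defaults_ok = backup_buffers_valid(16 * MB, 16 * MB, 256 * KB, 1 * MB)
```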

NDB Cluster Realtime Performance Parameters

The [ndbd] parameters discussed in this section are used in scheduling and locking of threads to specific CPUs on multiprocessor data node hosts.

Note

To make use of these parameters, the data node process must be run as system root.

  • LockExecuteThreadToCPU

    Table 21.145 This table provides type and value information for the LockExecuteThreadToCPU data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units set of CPU IDs
    Default 0
    Range ...
    Restart Type N

    When used with ndbd, this parameter (now a string) specifies the ID of the CPU assigned to handle the NDBCLUSTER execution thread. When used with ndbmtd, the value of this parameter is a comma-separated list of CPU IDs assigned to handle execution threads. Each CPU ID in the list should be an integer in the range 0 to 65535 (inclusive).

    The number of IDs specified should match the number of execution threads determined by MaxNoOfExecutionThreads. However, there is no guarantee that threads are assigned to CPUs in any given order when using this parameter. You can obtain more finely-grained control of this type using ThreadConfig.

    LockExecuteThreadToCPU has no default value.
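
    As an illustration (the CPU IDs shown are hypothetical), an ndbmtd node using four execution threads might be configured like this:

    ```ini
    [ndbd]
    MaxNoOfExecutionThreads=4
    # One CPU ID per execution thread; no particular thread-to-CPU
    # order is guaranteed (use ThreadConfig for finer control)
    LockExecuteThreadToCPU=0,1,2,3
    ```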

  • LockMaintThreadsToCPU

    Table 21.146 This table provides type and value information for the LockMaintThreadsToCPU data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units CPU ID
    Default 0
    Range 0 - 64K
    Restart Type N

    This parameter specifies the ID of the CPU assigned to handle NDBCLUSTER maintenance threads.

    The value of this parameter is an integer in the range 0 to 65535 (inclusive). There is no default value.

  • RealtimeScheduler

    Table 21.147 This table provides type and value information for the RealtimeScheduler data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    Setting this parameter to 1 enables real-time scheduling of data node threads.

    The default is 0 (scheduling disabled).

  • SchedulerExecutionTimer

    Table 21.148 This table provides type and value information for the SchedulerExecutionTimer data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units µs
    Default 50
    Range 0 - 11000
    Restart Type N

    This parameter specifies the time in microseconds for threads to be executed in the scheduler before being sent. Setting it to 0 minimizes the response time; to achieve higher throughput, you can increase the value at the expense of longer response times.

    The default is 50 μsec, which our testing shows to increase throughput slightly in high-load cases without materially delaying requests.

  • SchedulerResponsiveness

    Table 21.149 This table provides type and value information for the SchedulerResponsiveness data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 5
    Range 0 - 10
    Restart Type S

    Set the balance in the NDB scheduler between speed and throughput. This parameter takes an integer whose value is in the range 0-10 inclusive, with 5 as the default. Higher values provide better response times relative to throughput. Lower values provide increased throughput at the expense of longer response times.

  • SchedulerSpinTimer

    Table 21.150 This table provides type and value information for the SchedulerSpinTimer data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units µs
    Default 0
    Range 0 - 500
    Restart Type N

    This parameter specifies the time in microseconds for threads to be executed in the scheduler before sleeping.

    The default value is 0.

  • BuildIndexThreads

    Table 21.151 This table provides type and value information for the BuildIndexThreads data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 0
    Range 0 - 128
    Restart Type S
    Version (or later) NDB 7.6.4
    Type or units numeric
    Default 128
    Range 0 - 128
    Restart Type S

    This parameter determines the number of threads to create when rebuilding ordered indexes during a system or node start, as well as when running ndb_restore --rebuild-indexes. It is supported only when there is more than one fragment for the table per data node (for example, when COMMENT="NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_LDM_X_2" is used with CREATE TABLE).

    Setting this parameter to 0 (the default) disables multithreaded building of ordered indexes.

    This parameter is supported when using ndbd or ndbmtd.

    You can enable multithreaded builds during data node initial restarts by setting the TwoPassInitialNodeRestartCopy data node configuration parameter to TRUE.

  • TwoPassInitialNodeRestartCopy

    Table 21.152 This table provides type and value information for the TwoPassInitialNodeRestartCopy data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N
    Version (or later) NDB 7.6.4
    Type or units boolean
    Default true
    Range true, false
    Restart Type N

    Multithreaded building of ordered indexes can be enabled for initial restarts of data nodes by setting this configuration parameter to true, which enables two-pass copying of data during initial node restarts. Beginning with NDB 7.6.4, this is the default value (Bug #26704312, Bug #27109117).

    You must also set BuildIndexThreads to a nonzero value.
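
    Putting the two parameters together, a config.ini fragment enabling multithreaded index builds during initial node restarts might look like this (the thread count shown is illustrative; 128 is the NDB 7.6.4 default):

    ```ini
    [ndbd default]
    # Use multiple threads when rebuilding ordered indexes
    BuildIndexThreads=128
    # Enable two-pass copying of data during initial node restarts
    TwoPassInitialNodeRestartCopy=TRUE
    ```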

  • Numa

    Table 21.153 This table provides type and value information for the Numa data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 1
    Range ...
    Restart Type N

    This parameter determines whether Non-Uniform Memory Access (NUMA) is controlled by the operating system or by the data node process, whether the data node uses ndbd or ndbmtd. By default, NDB attempts to use an interleaved NUMA memory allocation policy on any data node where the host operating system provides NUMA support.

    Setting Numa = 0 means that the data node process does not itself attempt to set a policy for memory allocation, and permits this behavior to be determined by the operating system, which may be further guided by the separate numactl tool. That is, Numa = 0 yields the system default behavior, which can be customized by numactl. For many Linux systems, the system default behavior is to allocate socket-local memory to any given process at allocation time. This can be problematic when using ndbmtd; this is because ndbmtd allocates all memory at startup, leading to an imbalance, giving different access speeds for different sockets, especially when locking pages in main memory.

    Setting Numa = 1 means that the data node process uses libnuma to request interleaved memory allocation. (This can also be accomplished manually, on the operating system level, using numactl.) Using interleaved allocation in effect tells the data node process to ignore non-uniform memory access but does not attempt to take any advantage of fast local memory; instead, the data node process tries to avoid imbalances due to slow remote memory. If interleaved allocation is not desired, set Numa to 0 so that the desired behavior can be determined on the operating system level.

    The Numa configuration parameter is supported only on Linux systems where libnuma.so is available.
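
    For example, to defer the NUMA memory policy decision to the operating system (possibly in combination with the numactl tool), a node might be configured as follows:

    ```ini
    [ndbd]
    # Let the operating system (optionally guided by numactl)
    # determine the NUMA memory allocation policy for this data node
    Numa=0
    ```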

Multi-Threading Configuration Parameters (ndbmtd).  ndbmtd runs by default as a single-threaded process and must be configured to use multiple threads, using either of two methods, both of which require setting configuration parameters in the config.ini file. The first method is simply to set an appropriate value for the MaxNoOfExecutionThreads configuration parameter. The second method makes it possible to set up more complex rules for ndbmtd multithreading using ThreadConfig. The next few paragraphs provide information about these parameters and their use with multithreaded data nodes.

  • MaxNoOfExecutionThreads

    Table 21.154 This table provides type and value information for the MaxNoOfExecutionThreads multi-threaded data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 2
    Range 2 - 72
    Restart Type IS

    This parameter directly controls the number of execution threads used by ndbmtd, up to a maximum of 72. Although this parameter is set in [ndbd] or [ndbd default] sections of the config.ini file, it is exclusive to ndbmtd and does not apply to ndbd.

    Setting MaxNoOfExecutionThreads sets the number of threads for each type as determined by a matrix in the file storage/ndb/src/kernel/vm/mt_thr_config.cpp. The following table shows these numbers of threads for possible values of MaxNoOfExecutionThreads.

    Table 21.155 MaxNoOfExecutionThreads values and the corresponding number of threads by thread type (LQH, TC, Send, Receive).

    MaxNoOfExecutionThreads Value LDM Threads TC Threads Send Threads Receive Threads
    0 .. 3 1 0 0 1
    4 .. 6 2 0 0 1
    7 .. 8 4 0 0 1
    9 4 2 0 1
    10 4 2 1 1
    11 4 3 1 1
    12 6 2 1 1
    13 6 3 1 1
    14 6 3 1 2
    15 6 3 2 2
    16 8 3 1 2
    17 8 4 1 2
    18 8 4 2 2
    19 8 5 2 2
    20 10 4 2 2
    21 10 5 2 2
    22 10 5 2 3
    23 10 6 2 3
    24 12 5 2 3
    25 12 6 2 3
    26 12 6 3 3
    27 12 7 3 3
    28 12 7 3 4
    29 12 8 3 4
    30 12 8 4 4
    31 12 9 4 4
    32 16 8 3 3
    33 16 8 3 4
    34 16 8 4 4
    35 16 9 4 4
    36 16 10 4 4
    37 16 10 4 5
    38 16 11 4 5
    39 16 11 5 5
    40 20 10 4 4
    41 20 10 4 5
    42 20 11 4 5
    43 20 11 5 5
    44 20 12 5 5
    45 20 12 5 6
    46 20 13 5 6
    47 20 13 6 6
    48 24 12 5 5
    49 24 12 5 6
    50 24 13 5 6
    51 24 13 6 6
    52 24 14 6 6
    53 24 14 6 7
    54 24 15 6 7
    55 24 15 7 7
    56 24 16 7 7
    57 24 16 7 8
    58 24 17 7 8
    59 24 17 8 8
    60 24 18 8 8
    61 24 18 8 9
    62 24 19 8 9
    63 24 19 9 9
    64 32 16 7 7
    65 32 16 7 8
    66 32 17 7 8
    67 32 17 8 8
    68 32 18 8 8
    69 32 18 8 9
    70 32 19 8 9
    71 32 20 8 9
    72 32 20 8 10

    There is always one SUMA (replication) thread.
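
    A few sample rows of Table 21.155 can be encoded as a simple lookup, as sketched below; the dictionary is a hand-copied excerpt for illustration, not the authoritative matrix (which lives in storage/ndb/src/kernel/vm/mt_thr_config.cpp):

    ```python
    # (LDM, TC, send, receive) thread counts for selected values of
    # MaxNoOfExecutionThreads, excerpted from Table 21.155.
    SAMPLE_THREAD_MATRIX = {
        8:  (4, 0, 0, 1),
        16: (8, 3, 1, 2),
        24: (12, 5, 2, 3),
        32: (16, 8, 3, 3),
        48: (24, 12, 5, 5),
        72: (32, 20, 8, 10),
    }

    def thread_counts(max_no_of_execution_threads):
        """Look up the thread breakdown for a sampled setting."""
        return SAMPLE_THREAD_MATRIX[max_no_of_execution_threads]
    ```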

    NoOfFragmentLogParts should be set equal to the number of LDM threads used by ndbmtd, as determined by the setting for this parameter. The ratio of redo log parts to LDM threads should not be greater than 4:1; beginning with NDB 7.5.7 and NDB 7.6.3, a configuration exceeding this ratio is specifically disallowed. (Bug #25333414)

    The number of LDM threads also determines the number of partitions used by an NDB table that is not explicitly partitioned; this is the number of LDM threads times the number of data nodes in the cluster. (If ndbd is used on the data nodes rather than ndbmtd, then there is always a single LDM thread; in this case, the number of partitions created automatically is simply equal to the number of data nodes.) See Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”, for more information.

    Adding large tablespaces for Disk Data tables when using more than the default number of LDM threads may cause issues with resource and CPU usage if the disk page buffer is insufficiently large; see the description of the DiskPageBufferMemory configuration parameter, for more information.

    The thread types are described later in this section (see ThreadConfig).

    Setting this parameter outside the permitted range of values causes the management server to abort on startup with the error Error line number: Illegal value value for parameter MaxNoOfExecutionThreads.

    For MaxNoOfExecutionThreads, a value of 0 or 1 is rounded up internally by NDB to 2, so that 2 is considered this parameter's default and minimum value.

    MaxNoOfExecutionThreads is generally intended to be set equal to the number of CPU threads available, and to allocate a number of threads of each type suitable to typical workloads. It does not assign particular threads to specified CPUs. For cases where it is desirable to vary from the settings provided, or to bind threads to CPUs, you should use ThreadConfig instead, which allows you to allocate each thread directly to a desired type, CPU, or both.
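
    For example, on a host with 16 CPU threads, a typical (illustrative) configuration sets this parameter to match the hardware, along with a corresponding number of redo log parts (per Table 21.155, this setting yields 8 LDM threads):

    ```ini
    [ndbd default]
    MaxNoOfExecutionThreads=16
    # Should equal the number of LDM threads (8 at this setting)
    NoOfFragmentLogParts=8
    ```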

    The multithreaded data node process always spawns, at a minimum, the threads listed here:

    • 1 local query handler (LDM) thread

    • 1 receive thread

    • 1 subscription manager (SUMA or replication) thread

    For a MaxNoOfExecutionThreads value of 8 or less, no TC threads are created, and TC handling is instead performed by the main thread.

    Prior to NDB 7.6, changing the number of LDM threads always requires a system restart, whether the change is made using this parameter or ThreadConfig. In NDB 7.6 and later, a full system restart can be avoided, as follows:

    • If, following the change, the number of LDM threads remains the same as before, nothing more than a simple node restart (rolling restart, or N) is required to implement the change.

    • Otherwise (that is, if the number of LDM threads changes), it is still possible to effect the change using a node initial restart (NI) provided the following two conditions are met:

      1. Each LDM thread handles a maximum of 8 fragments, and

      2. The total number of table fragments is an integer multiple of the number of LDM threads.

    Prior to NDB 7.6, if the cluster's IndexMemory usage is greater than 50%, changing this requires an initial restart of the cluster. (A maximum of 30-35% IndexMemory usage is recommended in such cases.) Otherwise, resource usage and LDM thread allocation cannot be balanced between nodes, which can result in underutilized and overutilized LDM threads, and ultimately data node failures. In NDB 7.6 and later, an initial restart is not required to effect a change in this parameter.

  • NoOfFragmentLogParts

    Table 21.156 This table provides type and value information for the NoOfFragmentLogParts multi-threaded data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 4
    Range 4, 8, 12, 16, 24, 32
    Restart Type IN

    Set the number of log file groups for redo logs belonging to this ndbmtd. The maximum value is 32; the value set must be an even multiple of 4.

    NoOfFragmentLogParts should be set equal to the number of LDM threads used by ndbmtd as determined by the setting for MaxNoOfExecutionThreads. Beginning with NDB 7.5.7 and NDB 7.6.3, a configuration using more than 4 redo log parts per LDM is disallowed. (Bug #25333414)

    See the description of MaxNoOfExecutionThreads for more information.
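
    The constraints just described can be sketched as a small validity check. The helper below is illustrative Python of ours, not part of NDB; it encodes only the hard rules stated above (the permitted values from the parameter's Range row, and the limit of 4 redo log parts per LDM thread in NDB 7.5.7/NDB 7.6.3 and later).

```python
# Hypothetical validity check for NoOfFragmentLogParts (illustrative only).
ALLOWED_VALUES = {4, 8, 12, 16, 24, 32}  # from the parameter's Range row

def log_parts_ok(no_of_fragment_log_parts, ldm_threads):
    """True if the setting follows the documented hard rules."""
    if no_of_fragment_log_parts not in ALLOWED_VALUES:
        return False
    # NDB 7.5.7 / NDB 7.6.3 and later: at most 4 redo log parts per LDM.
    return no_of_fragment_log_parts <= 4 * ldm_threads
```

    The recommendation to set NoOfFragmentLogParts equal to the number of LDM threads is guidance rather than a hard limit, so it is not enforced here.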

  • ThreadConfig

    Table 21.157 This table provides type and value information for the ThreadConfig multi-threaded data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units string
    Default ''
    Range ...
    Restart Type IS

    This parameter is used with ndbmtd to assign threads of different types to different CPUs. Its value is a string whose format has the following syntax:

    ThreadConfig := entry[,entry[,...]]
    
    entry := type={param[,param[,...]]}
    
    type := ldm | main | recv | send | rep | io | tc | watchdog | idxbld
    
    param := count=number
      | cpubind=cpu_list
      | cpuset=cpu_list
      | spintime=number
      | realtime={0|1}
      | nosend={0|1}
      | thread_prio={0..10}
      | cpubind_exclusive=cpu_list
      | cpuset_exclusive=cpu_list
    

    The curly braces ({...}) surrounding the list of parameters are required, even if there is only one parameter in the list.

    A param (parameter) specifies any or all of the following information:

    • The number of threads of the given type (count).

    • The set of CPUs to which the threads of the given type are to be nonexclusively bound. This is determined by one of cpubind or cpuset. cpubind causes each thread to be bound (nonexclusively) to a CPU in the set; cpuset means that each thread is bound (nonexclusively) to the set of CPUs specified.

      On Solaris, you can instead specify a set of CPUs to which the threads of the given type are to be bound exclusively. cpubind_exclusive causes each thread to be bound exclusively to a CPU in the set; cpuset_exclusive means that each thread is bound exclusively to the set of CPUs specified.

      Only one of cpubind, cpuset, cpubind_exclusive, or cpuset_exclusive can be provided in a single configuration.

    • spintime determines the wait time in microseconds the thread spins before going to sleep.

      The default value for spintime is the value of the SchedulerSpinTimer data node configuration parameter.

      spintime does not apply to I/O threads, watchdog, or offline index build threads, and so cannot be set for these thread types.

    • realtime can be set to 0 or 1. If it is set to 1, the threads run with real-time priority. This also means that thread_prio cannot be set.

      The realtime parameter is set by default to the value of the RealtimeScheduler data node configuration parameter.

      realtime cannot be set for offline index build threads.

    • By setting nosend to 1, you can prevent a main, ldm, rep, or tc thread from assisting the send threads. This parameter is 0 by default, and cannot be used with other types of threads.

      nosend was added in NDB 7.6.4.

    • thread_prio is a thread priority level that can be set from 0 to 10, with 10 representing the greatest priority. The default is 5. The precise effects of this parameter are platform-specific, and are described later in this section.

      The thread priority level cannot be set for offline index build threads.

    thread_prio settings and effects by platform.  The implementation of thread_prio differs between Linux/FreeBSD, Solaris, and Windows. In the following list, we discuss its effects on each of these platforms in turn:

    • Linux and FreeBSD: We map thread_prio to a value to be supplied to the nice system call. Since a lower niceness value for a process indicates a higher process priority, increasing thread_prio has the effect of lowering the nice value.

      Table 21.158 Mapping of thread_prio to nice values on Linux and FreeBSD

      thread_prio value nice value
      0 19
      1 16
      2 12
      3 8
      4 4
      5 0
      6 -4
      7 -8
      8 -12
      9 -16
      10 -20

      Some operating systems may provide for a maximum process niceness level of 20, but this is not supported by all targeted versions; for this reason, we choose 19 as the maximum nice value that can be set.

    • Solaris: Setting thread_prio on Solaris sets the Solaris FX priority, with mappings as shown in the following table:

      Table 21.159 Mapping of thread_prio to FX priority on Solaris

      thread_prio value Solaris FX priority
      0 15
      1 20
      2 25
      3 30
      4 35
      5 40
      6 45
      7 50
      8 55
      9 59
      10 60

      A thread_prio setting of 9 is mapped on Solaris to the special FX priority value 59, which means that the operating system also attempts to force the thread to run alone on its own CPU core.

    • Windows: We map thread_prio to a Windows thread priority value passed to the Windows API SetThreadPriority() function. This mapping is shown in the following table:

      Table 21.160 Mapping of thread_prio to Windows thread priority

      thread_prio value Windows thread priority
      0 - 1 THREAD_PRIORITY_LOWEST
      2 - 3 THREAD_PRIORITY_BELOW_NORMAL
      4 - 5 THREAD_PRIORITY_NORMAL
      6 - 7 THREAD_PRIORITY_ABOVE_NORMAL
      8 - 10 THREAD_PRIORITY_HIGHEST
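
      Where it helps to reason about these mappings programmatically, the Linux/FreeBSD mapping from Table 21.158 can be written as a simple lookup table. This is an illustrative sketch of ours; the real mapping is implemented inside ndbmtd.

```python
# Table 21.158 as a Python lookup (illustrative; implemented inside ndbmtd).
THREAD_PRIO_TO_NICE = {
    0: 19, 1: 16, 2: 12, 3: 8, 4: 4,
    5: 0, 6: -4, 7: -8, 8: -12, 9: -16, 10: -20,
}

def nice_for(thread_prio):
    # A higher thread_prio maps to a lower (higher-priority) nice value.
    return THREAD_PRIO_TO_NICE[thread_prio]
```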

    The type attribute represents an NDB thread type. The thread types supported, and the range of permitted count values for each, are provided in the following list:

    • ldm: Local query handler (DBLQH kernel block) that handles data. The more LDM threads that are used, the more highly partitioned the data becomes. Each LDM thread maintains its own sets of data and index partitions, as well as its own redo log. The value set for ldm must be one of the values 1, 2, 4, 6, 8, 12, 16, 24, or 32.

      Changing the number of LDM threads normally requires an initial system restart to be effective and safe for cluster operations. This requirement is relaxed in NDB 7.6, as explained later in this section. (This is also true when this is done using MaxNoOfExecutionThreads.) NDB 7.5 and earlier: If IndexMemory usage is in excess of 50%, an initial restart of the cluster is required; a maximum of 30-35% IndexMemory usage is recommended in such cases. Otherwise, allocation of memory and LDM threads cannot be balanced between nodes, which can ultimately lead to data node failures.

      Adding large tablespaces (hundreds of gigabytes or more) for Disk Data tables when using more than the default number of LDMs may cause issues with resource and CPU usage if DiskPageBufferMemory is not sufficiently large.

    • tc: Transaction coordinator thread (DBTC kernel block) containing the state of an ongoing transaction. The maximum number of TC threads is 32.

      Optimally, every new transaction can be assigned to a new TC thread. In most cases 1 TC thread per 2 LDM threads is sufficient to guarantee that this can happen. In cases where the number of writes is relatively small when compared to the number of reads, it is possible that only 1 TC thread per 4 LQH threads is required to maintain transaction states. Conversely, in applications that perform a great many updates, it may be necessary for the ratio of TC threads to LDM threads to approach 1 (for example, 3 TC threads to 4 LDM threads).

      Setting tc to 0 causes TC handling to be done by the main thread. In most cases, this is effectively the same as setting it to 1.

      Range: 0 - 32

    • main: Data dictionary and transaction coordinator (DBDIH and DBTC kernel blocks), providing schema management. This is always handled by a single dedicated thread.

      Range: 1 only.

    • recv: Receive thread (CMVMI kernel block). Each receive thread handles one or more sockets for communicating with other nodes in an NDB Cluster, with one socket per node. NDB Cluster supports multiple receive threads; the maximum is 16 such threads.

      Range: 1 - 16

    • send: Send thread (CMVMI kernel block). To increase throughput, it is possible to perform sends from one or more separate, dedicated threads (maximum 8).

      Previously, all threads handled their own sending directly; this can still be made to happen by setting the number of send threads to 0 (this also happens when MaxNoOfExecutionThreads is set to less than 10). While doing so can have an adverse impact on throughput, it can also in some cases decrease latency.

      Range: 0 - 16

    • rep: Replication thread (SUMA kernel block). Asynchronous replication operations are always handled by a single, dedicated thread.

      Range: 1 only.

    • io: File system and other miscellaneous operations. These are not demanding tasks, and are always handled as a group by a single, dedicated I/O thread.

      Range: 1 only.

    • watchdog: Parameter settings associated with this type are actually applied to several threads, each having a specific use. These threads include the SocketServer thread, which receives connection setups from other nodes; the SocketClient thread, which attempts to set up connections to other nodes; and the thread watchdog thread, which checks that threads are progressing.

      Range: 1 only.

    • idxbld: Offline index build threads. Unlike the other thread types listed previously, which are permanent, these are temporary threads which are created and used only during node or system restarts, or when running ndb_restore --rebuild-indexes. They may be bound to CPU sets which overlap with CPU sets bound to permanent thread types.

      thread_prio, realtime, and spintime values cannot be set for offline index build threads. In addition, count is ignored for this type of thread.

      If idxbld is not specified, the default behavior is as follows:

      • Offline index build threads are not bound if the I/O thread is also not bound, and these threads use any available cores.

      • If the I/O thread is bound, then the offline index build threads are bound to the entire set of bound threads, due to the fact that there should be no other tasks for these threads to perform.

      Range: 0 - 1.

      This thread type was added in NDB 7.6.4. (Bug #25835748, Bug #26928111)

Prior to NDB 7.6, changing ThreadConfig requires a system initial restart. In NDB 7.6 and later, this requirement can be relaxed under certain circumstances:

  • If, following the change, the number of LDM threads remains the same as before, nothing more than a simple node restart (rolling restart, or N) is required to implement the change.

  • Otherwise (that is, if the number of LDM threads changes), it is still possible to effect the change using a node initial restart (NI) provided the following two conditions are met:

    1. Each LDM thread handles a maximum of 8 fragments, and

    2. The total number of table fragments is an integer multiple of the number of LDM threads.

In any other case, a system initial restart is needed to change this parameter.
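
The two conditions can be checked with a few lines of arithmetic. The helper below is a hypothetical sketch of ours, not an NDB tool; it assumes fragments are distributed evenly across LDM threads.

```python
# Hypothetical check: can a change in LDM thread count be applied with a
# node initial restart (NI) instead of a system initial restart?
def ni_restart_possible(total_fragments, new_ldm_threads):
    per_ldm_ok = total_fragments <= 8 * new_ldm_threads      # condition 1
    even_split_ok = total_fragments % new_ldm_threads == 0   # condition 2
    return per_ldm_ok and even_split_ok
```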

NDB 7.6.4 and later can distinguish between thread types by both of the following criteria:

  • Whether the thread is an execution thread. Threads of type main, ldm, recv, rep, tc, and send are execution threads; io, watchdog, and idxbld threads are not considered execution threads.

  • Whether the allocation of threads to a given task is permanent or temporary. Currently all thread types except idxbld are considered permanent; idxbld threads are regarded as temporary threads.

Simple examples:

# Example 1.

ThreadConfig=ldm={count=2,cpubind=1,2},main={cpubind=12},rep={cpubind=11}

# Example 2.

ThreadConfig=main={cpubind=0},ldm={count=4,cpubind=1,2,5,6},io={cpubind=3}

It is usually desirable when configuring thread usage for a data node host to reserve one or more CPUs for the operating system and other tasks. Thus, for a host machine with 24 CPUs, you might want to use 20 CPU threads (leaving 4 for other uses), with 8 LDM threads, 4 TC threads (half the number of LDM threads), 3 send threads, 3 receive threads, and 1 thread each for schema management, asynchronous replication, and I/O operations. (This is almost the same distribution of threads used when MaxNoOfExecutionThreads is set equal to 20.) The following ThreadConfig setting performs these assignments, additionally binding all of these threads to specific CPUs:

ThreadConfig=ldm={count=8,cpubind=1,2,3,4,5,6,7,8},main={cpubind=9},io={cpubind=9}, \
rep={cpubind=10},tc={count=4,cpubind=11,12,13,14},recv={count=3,cpubind=15,16,17}, \
send={count=3,cpubind=18,19,20}

It should be possible in most cases to bind the main (schema management) thread and the I/O thread to the same CPU, as we have done in the example just shown.
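
As an aside, the entry/param grammar shown earlier is simple enough to assemble mechanically. The following Python sketch (ours, purely illustrative, not an NDB utility) builds a ThreadConfig string such as the one in Example 1 above:

```python
# Illustrative builder for ThreadConfig strings (not an NDB utility).
def build_thread_config(entries):
    parts = []
    for thread_type, params in entries:
        inner = ",".join("%s=%s" % (k, v) for k, v in params)
        parts.append("%s={%s}" % (thread_type, inner))
    return ",".join(parts)

setting = build_thread_config([
    ("ldm",  [("count", 2), ("cpubind", "1,2")]),
    ("main", [("cpubind", "12")]),
    ("rep",  [("cpubind", "11")]),
])
# setting == "ldm={count=2,cpubind=1,2},main={cpubind=12},rep={cpubind=11}"
```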

The following example incorporates groups of CPUs defined using both cpuset and cpubind, as well as use of thread prioritization.

ThreadConfig=ldm={count=4,cpuset=0-3,thread_prio=8,spintime=200}, \
ldm={count=4,cpubind=4-7,thread_prio=8,spintime=200}, \
tc={count=4,cpuset=8-9,thread_prio=6},send={count=2,thread_prio=10,cpubind=10-11}, \
main={count=1,cpubind=10},rep={count=1,cpubind=11}
        

In this case we create two LDM groups; the first uses cpubind and the second uses cpuset. thread_prio and spintime are set to the same values for each group. This means there are eight LDM threads in total. (You should ensure that NoOfFragmentLogParts is also set to 8.) The four TC threads use only two CPUs; when using cpuset, it is possible to specify fewer CPUs than there are threads in the group. (This is not true for cpubind.) The send threads use cpubind to bind two threads to CPUs 10 and 11. The main and rep threads can reuse these CPUs.

This example shows how ThreadConfig and NoOfFragmentLogParts might be set up for a 24-CPU host with hyperthreading, leaving CPUs 10, 11, 22, and 23 available for operating system functions and interrupts:

NoOfFragmentLogParts=10
ThreadConfig=ldm={count=10,cpubind=0-4,12-16,thread_prio=9,spintime=200}, \
tc={count=4,cpuset=6-7,18-19,thread_prio=8},send={count=1,cpuset=8}, \
recv={count=1,cpuset=20},main={count=1,cpuset=9,21},rep={count=1,cpuset=9,21}, \
io={count=1,cpuset=9,21,thread_prio=8},watchdog={count=1,cpuset=9,21,thread_prio=9}

The next few examples include settings for idxbld. The first two of these demonstrate how a CPU set defined for idxbld can overlap those specified for other (permanent) thread types, the first using cpuset and the second using cpubind:

ThreadConfig=main,ldm={count=4,cpuset=1-4},tc={count=4,cpuset=5,6,7}, \
io={cpubind=8},idxbld={cpuset=1-8}

ThreadConfig=main,ldm={count=1,cpubind=1},idxbld={count=1,cpubind=1}

The next example specifies a CPU for the I/O thread, but not for the index build threads:

ThreadConfig=main,ldm={count=4,cpuset=1-4},tc={count=4,cpuset=5,6,7}, \
io={cpubind=8}

Since the ThreadConfig setting just shown locks threads to eight cores numbered 1 through 8, it is equivalent to the setting shown here:

ThreadConfig=main,ldm={count=4,cpuset=1-4},tc={count=4,cpuset=5,6,7}, \
io={cpubind=8},idxbld={cpuset=1,2,3,4,5,6,7,8}

In order to take advantage of the enhanced stability that the use of ThreadConfig offers, it is necessary to ensure that CPUs are isolated, and that they are not subject to interrupts or to being scheduled for other tasks by the operating system. On many Linux systems, you can do this by setting IRQBALANCE_BANNED_CPUS in /etc/sysconfig/irqbalance to 0xFFFFF0, and by using the isolcpus boot option in grub.conf. For specific information, see your operating system or platform documentation.

Disk Data Configuration Parameters.  Configuration parameters affecting Disk Data behavior include the following:

  • DiskPageBufferEntries

    Table 21.161 This table provides type and value information for the DiskPageBufferEntries data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units 32K pages
    Default 10
    Range 1 - 1000
    Restart Type N

    This is the number of page entries (page references) to allocate. It is specified as a number of 32K pages in DiskPageBufferMemory. The default is sufficient for most cases but you may need to increase the value of this parameter if you encounter problems with very large transactions on Disk Data tables. Each page entry requires approximately 100 bytes.

  • DiskPageBufferMemory

    Table 21.162 This table provides type and value information for the DiskPageBufferMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 64M
    Range 4M - 1T
    Restart Type N

    This determines the amount of space used for caching pages on disk, and is set in the [ndbd] or [ndbd default] section of the config.ini file. It is measured in bytes. Each page takes up 32 KB. This means that NDB Cluster Disk Data storage always uses N * 32 KB memory where N is some nonnegative integer.
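
    Because the cache is allocated in whole 32 KB pages, the page count for a given setting is straightforward integer arithmetic (an illustrative sketch):

```python
# Number of 32 KB cache pages provided by a DiskPageBufferMemory setting.
PAGE_SIZE = 32 * 1024  # bytes

def pages_for(disk_page_buffer_memory_bytes):
    return disk_page_buffer_memory_bytes // PAGE_SIZE

pages = pages_for(64 * 1024 * 1024)  # the 64M default
# pages == 2048
```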

    The default value for this parameter is 64M (2048 pages of 32 KB each).

    If the value for DiskPageBufferMemory is set too low in conjunction with using more than the default number of LDM threads in ThreadConfig (for example {ldm=6...}), problems can arise when trying to add a large (for example 500G) data file to a disk-based NDB table, wherein the process can take an indefinitely long time while occupying one of the CPU cores.

    This is due to the fact that, as part of adding a data file to a tablespace, extent pages are locked into memory in an extra PGMAN worker thread, for quick metadata access. When adding a large file, this worker has insufficient memory for all of the data file metadata. In such cases, you should either increase DiskPageBufferMemory, or add smaller tablespace files. You may also need to adjust DiskPageBufferEntries.

    You can query the ndbinfo.diskpagebuffer table to help determine whether the value for this parameter should be increased to minimize unnecessary disk seeks. See Section 21.5.10.20, “The ndbinfo diskpagebuffer Table”, for more information.

  • SharedGlobalMemory

    Table 21.163 This table provides type and value information for the SharedGlobalMemory data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 128M
    Range 0 - 64T
    Restart Type N

    This parameter determines the amount of memory that is used for log buffers, disk operations (such as page requests and wait queues), and metadata for tablespaces, log file groups, UNDO files, and data files. The shared global memory pool also provides memory used for satisfying the memory requirements of the UNDO_BUFFER_SIZE option used with CREATE LOGFILE GROUP and ALTER LOGFILE GROUP statements, including any default value implied for this option by the setting of the InitialLogFileGroup data node configuration parameter. SharedGlobalMemory can be set in the [ndbd] or [ndbd default] section of the config.ini configuration file, and is measured in bytes.

    The default value is 128M.

  • DiskIOThreadPool

    Table 21.164 This table provides type and value information for the DiskIOThreadPool data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units threads
    Default 2
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter determines the number of unbound threads used for Disk Data file access. Before DiskIOThreadPool was introduced, exactly one thread was spawned for each Disk Data file, which could lead to performance issues, particularly when using very large data files. With DiskIOThreadPool, you can—for example—access a single large data file using several threads working in parallel.

    This parameter applies to Disk Data I/O threads only.

    The optimum value for this parameter depends on your hardware and configuration, and includes these factors:

    • Physical distribution of Disk Data files.  You can obtain better performance by placing data files, undo log files, and the data node file system on separate physical disks. If you do this with some or all of these sets of files, then you can set DiskIOThreadPool higher to enable separate threads to handle the files on each disk.

    • Disk performance and types.  The number of threads that can be accommodated for Disk Data file handling is also dependent on the speed and throughput of the disks. Faster disks and higher throughput allow for more disk I/O threads. Our test results indicate that solid-state disk drives can handle many more disk I/O threads than conventional disks, and thus higher values for DiskIOThreadPool.

    The default value for this parameter is 2.

  • Disk Data file system parameters.  The parameters in the following list make it possible to place NDB Cluster Disk Data files in specific directories without the need for using symbolic links.

    • FileSystemPathDD

      Table 21.165 This table provides type and value information for the FileSystemPathDD data node configuration parameter

      Property Value
      Version (or later) NDB 7.5.0
      Type or units filename
      Default [see text]
      Range ...
      Restart Type IN

      If this parameter is specified, then NDB Cluster Disk Data data files and undo log files are placed in the indicated directory. This can be overridden for data files, undo log files, or both, by specifying values for FileSystemPathDataFiles, FileSystemPathUndoFiles, or both, as explained for these parameters. It can also be overridden for data files by specifying a path in the ADD DATAFILE clause of a CREATE TABLESPACE or ALTER TABLESPACE statement, and for undo log files by specifying a path in the ADD UNDOFILE clause of a CREATE LOGFILE GROUP or ALTER LOGFILE GROUP statement. If FileSystemPathDD is not specified, then FileSystemPath is used.

      If a FileSystemPathDD directory is specified for a given data node (including the case where the parameter is specified in the [ndbd default] section of the config.ini file), then starting that data node with --initial causes all files in the directory to be deleted.

    • FileSystemPathDataFiles

      Table 21.166 This table provides type and value information for the FileSystemPathDataFiles data node configuration parameter

      Property Value
      Version (or later) NDB 7.5.0
      Type or units filename
      Default [see text]
      Range ...
      Restart Type IN

      If this parameter is specified, then NDB Cluster Disk Data data files are placed in the indicated directory. This overrides any value set for FileSystemPathDD. This parameter can be overridden for a given data file by specifying a path in the ADD DATAFILE clause of a CREATE TABLESPACE or ALTER TABLESPACE statement used to create that data file. If FileSystemPathDataFiles is not specified, then FileSystemPathDD is used (or FileSystemPath, if FileSystemPathDD has also not been set).

      If a FileSystemPathDataFiles directory is specified for a given data node (including the case where the parameter is specified in the [ndbd default] section of the config.ini file), then starting that data node with --initial causes all files in the directory to be deleted.
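
      The fallback order described above (FileSystemPathDataFiles, then FileSystemPathDD, then FileSystemPath) can be summarized as follows. The function is a hypothetical sketch of ours; only the parameter names come from the manual.

```python
# Hypothetical summary of the documented lookup order for data file paths.
def data_file_directory(config):
    return (config.get("FileSystemPathDataFiles")
            or config.get("FileSystemPathDD")
            or config.get("FileSystemPath"))
```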

    • FileSystemPathUndoFiles

      Table 21.167 This table provides type and value information for the FileSystemPathUndoFiles data node configuration parameter

      Property Value
      Version (or later) NDB 7.5.0
      Type or units filename
      Default [see text]
      Range ...
      Restart Type IN

      If this parameter is specified, then NDB Cluster Disk Data undo log files are placed in the indicated directory. This overrides any value set for FileSystemPathDD. This parameter can be overridden for a given data file by specifying a path in the ADD UNDO clause of a CREATE LOGFILE GROUP or ALTER LOGFILE GROUP statement used to create that data file. If FileSystemPathUndoFiles is not specified, then FileSystemPathDD is used (or FileSystemPath, if FileSystemPathDD has also not been set).

      If a FileSystemPathUndoFiles directory is specified for a given data node (including the case where the parameter is specified in the [ndbd default] section of the config.ini file), then starting that data node with --initial causes all files in the directory to be deleted.
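
      Taken together, the Disk Data placement parameters can be layered as in the following config.ini sketch (the directory paths shown are hypothetical):

      [ndbd default]
      FileSystemPath=/data/ndb                  # data node file system; final fallback
      FileSystemPathDD=/data/ndb-dd             # fallback for both kinds of Disk Data files
      FileSystemPathDataFiles=/disk1/ndb-data   # Disk Data data files
      FileSystemPathUndoFiles=/disk2/ndb-undo   # Disk Data undo log files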

    For more information, see Section 21.5.13.1, “NDB Cluster Disk Data Objects”.

  • Disk Data object creation parameters.  The next two parameters enable you—when starting the cluster for the first time—to cause a Disk Data log file group, tablespace, or both, to be created without the use of SQL statements.

    • InitialLogFileGroup

      Table 21.168 This table provides type and value information for the InitialLogFileGroup data node configuration parameter

      Property Value
      Version (or later) NDB 7.5.0
      Type or units string
      Default [see text]
      Range ...
      Restart Type S

      This parameter can be used to specify a log file group that is created when performing an initial start of the cluster. InitialLogFileGroup is specified as shown here:

      InitialLogFileGroup = [name=name;] [undo_buffer_size=size;] file-specification-list
      
      file-specification-list:
          file-specification[; file-specification[; ...]]
      
      file-specification:
          filename:size
      

      The name of the log file group is optional and defaults to DEFAULT-LG. The undo_buffer_size is also optional; if omitted, it defaults to 64M. Each file-specification corresponds to an undo log file, and at least one must be specified in the file-specification-list. Undo log files are placed according to any values that have been set for FileSystemPath, FileSystemPathDD, and FileSystemPathUndoFiles, just as if they had been created as the result of a CREATE LOGFILE GROUP or ALTER LOGFILE GROUP statement.

      Consider the following:

      InitialLogFileGroup = name=LG1; undo_buffer_size=128M; undo1.log:250M; undo2.log:150M
      

      This is equivalent to the following SQL statements:

      CREATE LOGFILE GROUP LG1
          ADD UNDOFILE 'undo1.log'
          INITIAL_SIZE 250M
          UNDO_BUFFER_SIZE 128M
          ENGINE NDBCLUSTER;
      
      ALTER LOGFILE GROUP LG1
          ADD UNDOFILE 'undo2.log'
          INITIAL_SIZE 150M
          ENGINE NDBCLUSTER;
      

      This logfile group is created when the data nodes are started with --initial.

      Resources for the initial log file group are added to the global memory pool along with those indicated by the value of SharedGlobalMemory.

      This parameter, if used, should always be set in the [ndbd default] section of the config.ini file. The behavior of an NDB Cluster when different values are set on different data nodes is not defined.

    • InitialTablespace

      Table 21.169 This table provides type and value information for the InitialTablespace data node configuration parameter

      Property Value
      Version (or later) NDB 7.5.0
      Type or units string
      Default [see text]
      Range ...
      Restart Type S

      This parameter can be used to specify an NDB Cluster Disk Data tablespace that is created when performing an initial start of the cluster. InitialTablespace is specified as shown here:

      InitialTablespace = [name=name;] [extent_size=size;] file-specification-list
      

      The name of the tablespace is optional and defaults to DEFAULT-TS. The extent_size is also optional; it defaults to 1M. The file-specification-list uses the same syntax as shown with the InitialLogfileGroup parameter, the only difference being that each file-specification used with InitialTablespace corresponds to a data file. At least one must be specified in the file-specification-list. Data files are placed according to any values that have been set for FileSystemPath, FileSystemPathDD, and FileSystemPathDataFiles, just as if they had been created as the result of a CREATE TABLESPACE or ALTER TABLESPACE statement.

      For example, consider the following line specifying InitialTablespace in the [ndbd default] section of the config.ini file (as with InitialLogfileGroup, this parameter should always be set in the [ndbd default] section, as the behavior of an NDB Cluster when different values are set on different data nodes is not defined):

      InitialTablespace = name=TS1; extent_size=8M; data1.dat:2G; data2.dat:4G
      

      This is equivalent to the following SQL statements:

      CREATE TABLESPACE TS1
          ADD DATAFILE 'data1.dat'
          EXTENT_SIZE 8M
          INITIAL_SIZE 2G
          ENGINE NDBCLUSTER;
      
      ALTER TABLESPACE TS1
          ADD DATAFILE 'data2.dat'
          INITIAL_SIZE 4G
          ENGINE NDBCLUSTER;
      

      This tablespace is created when the data nodes are started with --initial, and can be used whenever creating NDB Cluster Disk Data tables thereafter.

Disk Data and GCP Stop errors.  Errors encountered when using Disk Data tables, such as Node nodeid killed this node because GCP stop was detected (error 2303), are often referred to as GCP stop errors. Such errors occur when the redo log is not flushed to disk quickly enough; this is usually due to slow disks and insufficient disk throughput.

You can help prevent these errors from occurring by using faster disks, and by placing Disk Data files on a separate disk from the data node file system. Reducing the value of TimeBetweenGlobalCheckpoints tends to decrease the amount of data to be written for each global checkpoint, and so may provide some protection against redo log buffer overflows when trying to write a global checkpoint; however, reducing this value also permits less time in which to write the GCP, so this must be done with caution.

In addition to the considerations given for DiskPageBufferMemory as explained previously, it is also very important that the DiskIOThreadPool configuration parameter be set correctly; having DiskIOThreadPool set too high is very likely to cause GCP stop errors (Bug #37227).

GCP stops can be caused by save or commit timeouts; the TimeBetweenEpochsTimeout data node configuration parameter determines the timeout for commits. However, it is possible to disable both types of timeouts by setting this parameter to 0.
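
As a sketch, the relevant settings might be combined in the [ndbd default] section as follows (the values are illustrative only; disabling the GCP timeouts should be done with care):

    [ndbd default]
    TimeBetweenGlobalCheckpoints=1000   # milliseconds between GCPs; smaller checkpoints
    TimeBetweenEpochsTimeout=0          # 0 disables GCP save and commit timeouts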

Parameters for configuring send buffer memory allocation.  Send buffer memory is allocated dynamically from a memory pool shared between all transporters, which means that the size of the send buffer can be adjusted as necessary. (Previously, the NDB kernel used a fixed-size send buffer for every node in the cluster, which was allocated when the node started and could not be changed while the node was running.) The TotalSendBufferMemory and OverLoadLimit data node configuration parameters permit the setting of limits on this memory allocation. For more information about the use of these parameters (as well as SendBufferMemory), see Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”.

  • ExtraSendBufferMemory

    This parameter specifies the amount of transporter send buffer memory to allocate in addition to any set using TotalSendBufferMemory, SendBufferMemory, or both.

  • TotalSendBufferMemory

    This parameter is used to determine the total amount of memory to allocate on this node for shared send buffer memory among all configured transporters.

    If this parameter is set, its minimum permitted value is 256KB; 0 indicates that the parameter has not been set. For more detailed information, see Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”.
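
    A minimal illustration of these send buffer parameters in config.ini (the sizes are arbitrary examples, not recommendations):

    [ndbd default]
    TotalSendBufferMemory=8M    # shared send buffer pool for all transporters (minimum 256K)
    ExtraSendBufferMemory=2M    # allocated in addition to the pool above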

See also Section 21.5.15, “Adding NDB Cluster Data Nodes Online”.

Redo log over-commit handling.  It is possible to control a data node's handling of operations when too much time is taken flushing redo logs to disk. This occurs when a given redo log flush takes longer than RedoOverCommitLimit seconds, more than RedoOverCommitCounter times, causing any pending transactions to be aborted. When this happens, the API node that sent the transaction can handle the operations that should have been committed either by queuing the operations and re-trying them, or by aborting them, as determined by DefaultOperationRedoProblemAction. The data node configuration parameters for setting the timeout and number of times it may be exceeded before the API node takes this action are described in the following list:

  • RedoOverCommitCounter

    Table 21.170 This table provides type and value information for the RedoOverCommitCounter data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default 3
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    When RedoOverCommitLimit is exceeded when trying to write a given redo log to disk this many times or more, any transactions that were not committed as a result are aborted, and an API node where any of these transactions originated handles the operations making up those transactions according to its value for DefaultOperationRedoProblemAction (by either queuing the operations to be re-tried, or aborting them).

    RedoOverCommitCounter defaults to 3. Set it to 0 to disable the limit.

  • RedoOverCommitLimit

    Table 21.171 This table provides type and value information for the RedoOverCommitLimit data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units seconds
    Default 20
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter sets an upper limit in seconds for trying to write a given redo log to disk before timing out. The number of times the data node tries to flush this redo log, but takes longer than RedoOverCommitLimit, is kept and compared with RedoOverCommitCounter, and when flushing takes too long more times than the value of that parameter, any transactions that were not committed as a result of the flush timeout are aborted. When this occurs, the API node where any of these transactions originated handles the operations making up those transactions according to its DefaultOperationRedoProblemAction setting (it either queues the operations to be re-tried, or aborts them).

    By default, RedoOverCommitLimit is 20 seconds. Set to 0 to disable checking for redo log flush timeouts.
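
    For example, to make over-commit handling more tolerant of occasional slow flushes (values illustrative only):

    [ndbd default]
    RedoOverCommitLimit=30      # allow up to 30 seconds per redo log flush
    RedoOverCommitCounter=5     # act only after 5 flushes exceed the limit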

Controlling restart attempts.  It is possible to exercise finely-grained control over restart attempts by data nodes when they fail to start, using the MaxStartFailRetries and StartFailRetryDelay data node configuration parameters.

MaxStartFailRetries limits the total number of retries made before giving up on starting the data node, while StartFailRetryDelay sets the number of seconds between retry attempts. These parameters are listed here:

  • StartFailRetryDelay

    Table 21.172 This table provides type and value information for the StartFailRetryDelay data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Use this parameter to set the number of seconds between restart attempts by the data node in the event of failure on startup. The default is 0 (no delay).

    Both this parameter and MaxStartFailRetries are ignored unless StopOnError is equal to 0.

  • MaxStartFailRetries

    Table 21.173 This table provides type and value information for the MaxStartFailRetries data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 3
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Use this parameter to limit the number of restart attempts made by the data node in the event that it fails on startup. The default is 3 attempts.

    Both this parameter and StartFailRetryDelay are ignored unless StopOnError is equal to 0.
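
    A sketch combining these parameters (illustrative values; note that StopOnError must be 0 for them to take effect):

    [ndbd default]
    StopOnError=0               # required for the two parameters below to apply
    MaxStartFailRetries=5       # give up after 5 failed start attempts
    StartFailRetryDelay=30      # wait 30 seconds between attempts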

NDB index statistics parameters.  The parameters in the following list relate to NDB index statistics generation.

  • IndexStatAutoCreate

    Table 21.174 This table provides type and value information for the IndexStatAutoCreate data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0, 1
    Restart Type S

    Enable (set equal to 1) or disable (set equal to 0) automatic statistics collection when indexes are created. Disabled by default.

  • IndexStatAutoUpdate

    Table 21.175 This table provides type and value information for the IndexStatAutoUpdate data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0, 1
    Restart Type S

    Enable (set equal to 1) or disable (set equal to 0) monitoring of indexes for changes, triggering automatic statistics updates when these are detected. The amount and degree of change needed to trigger the updates are determined by the settings for the IndexStatTriggerPct and IndexStatTriggerScale options.

  • IndexStatSaveSize

    Table 21.176 This table provides type and value information for the IndexStatSaveSize data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 32768
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type IN

    Maximum space in bytes allowed for the saved statistics of any given index in the NDB system tables and in the mysqld memory cache. In NDB 7.5 and earlier, this consumes IndexMemory.

    At least one sample is always produced, regardless of any size limit. This size is scaled by IndexStatSaveScale.

  • IndexStatSaveScale

    Table 21.177 This table provides type and value information for the IndexStatSaveScale data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units percentage
    Default 100
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type IN

    The size specified by IndexStatSaveSize is scaled by the value of IndexStatSaveScale for a large index, times 0.01. This is further multiplied by the logarithm to the base 2 of the index size. Setting IndexStatSaveScale equal to 0 disables the scaling effect.

  • IndexStatTriggerPct

    Table 21.178 This table provides type and value information for the IndexStatTriggerPct data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units percentage
    Default 100
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type IN

    Percentage change in updates that triggers an index statistics update. The value is scaled by IndexStatTriggerScale. You can disable this trigger altogether by setting IndexStatTriggerPct to 0.

  • IndexStatTriggerScale

    Table 21.179 This table provides type and value information for the IndexStatTriggerScale data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units percentage
    Default 100
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type IN

    Scale IndexStatTriggerPct by this amount times 0.01 for a large index. A value of 0 disables scaling.

  • IndexStatUpdateDelay

    Table 21.180 This table provides type and value information for the IndexStatUpdateDelay data node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units seconds
    Default 60
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type IN

    Minimum delay in seconds between automatic index statistics updates for a given index. Setting this variable to 0 disables any delay. The default is 60 seconds.
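
    As an illustration, the following fragment enables automatic index statistics handling with more sensitive triggering (the values are examples only, not recommendations):

    [ndbd default]
    IndexStatAutoCreate=1       # collect statistics when an index is created
    IndexStatAutoUpdate=1       # monitor indexes and update statistics on change
    IndexStatTriggerPct=50      # trigger an update after a 50% change (scaled)
    IndexStatUpdateDelay=120    # at most one automatic update per index every 2 minutes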

21.3.3.7 Defining SQL and Other API Nodes in an NDB Cluster

The [mysqld] and [api] sections in the config.ini file define the behavior of the MySQL servers (SQL nodes) and other applications (API nodes) used to access cluster data. None of the parameters shown is required. If no computer or host name is provided, any host can use this SQL or API node.

Generally speaking, a [mysqld] section is used to indicate a MySQL server providing an SQL interface to the cluster, and an [api] section is used for applications other than mysqld processes accessing cluster data, but the two designations are actually synonymous; you can, for instance, list parameters for a MySQL server acting as an SQL node in an [api] section.
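
For example, a minimal config.ini fragment defining one SQL node bound to a specific host and one free API slot might look like this (the host name is hypothetical):

    [mysqld]
    HostName=sqlhost1    # slot reserved for a MySQL server on this host

    [api]
    # no HostName: this slot may be used by an API client on any host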

Note

For a discussion of MySQL server options for NDB Cluster, see Section 21.3.3.9.1, “MySQL Server Options for NDB Cluster”. For information about MySQL server system variables relating to NDB Cluster, see Section 21.3.3.9.2, “NDB Cluster System Variables”.

Restart types.  Information about the restart types used by the parameter descriptions in this section is shown in the following table:

Table 21.181 NDB Cluster restart types

Symbol Restart Type Description
N Node The parameter can be updated using a rolling restart (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”)
S System All cluster nodes must be shut down completely, then restarted, to effect a change in this parameter
I Initial Data nodes must be restarted using the --initial option

  • Id

    Table 21.182 This table provides type and value information for the Id API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 1 - 255
    Restart Type IS

    The Id is an integer value used to identify the node in all cluster internal messages. The permitted range of values is 1 to 255 inclusive. This value must be unique for each node in the cluster, regardless of the type of node.

    Note

    Data node IDs must be less than 49, regardless of the NDB Cluster version used. If you plan to deploy a large number of data nodes, it is a good idea to limit the node IDs for API nodes (and management nodes) to values greater than 48.

    NodeId is the preferred parameter name to use when identifying API nodes. (Id continues to be supported for backward compatibility, but is now deprecated and generates a warning when used. It is also subject to future removal.)

  • ConnectionMap

    Table 21.183 This table provides type and value information for the ConnectionMap API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units string
    Default [none]
    Range ...
    Restart Type N

    Specifies which data nodes to connect.

  • NodeId

    Table 21.184 This table provides type and value information for the NodeId API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 1 - 255
    Restart Type IS

    The NodeId is an integer value used to identify the node in all cluster internal messages. The permitted range of values is 1 to 255 inclusive. This value must be unique for each node in the cluster, regardless of the type of node.

    Note

    Data node IDs must be less than 49, regardless of the NDB Cluster version used. If you plan to deploy a large number of data nodes, it is a good idea to limit the node IDs for API nodes (and management nodes) to values greater than 48.

    NodeId is the preferred parameter name to use when identifying management nodes. An alias, Id, was used for this purpose in very old versions of NDB Cluster, and continues to be supported for backward compatibility; it is now deprecated and generates a warning when used, and is subject to removal in a future release of NDB Cluster.

  • ExecuteOnComputer

    Table 21.185 This table provides type and value information for the ExecuteOnComputer API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name
    Default [none]
    Range ...
    Restart Type S

    This refers to the Id set for one of the computers (hosts) defined in a [computer] section of the configuration file.

    Important

    This parameter is deprecated as of NDB 7.5.0, and is subject to removal in a future release. Use the HostName parameter instead.

  • HostName

    Table 21.186 This table provides type and value information for the HostName API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default [none]
    Range ...
    Restart Type N

    Specifying this parameter defines the hostname of the computer on which the SQL node (API node) is to reside. To specify a hostname, either this parameter or ExecuteOnComputer is required.

    If no HostName or ExecuteOnComputer is specified in a given [mysqld] or [api] section of the config.ini file, then an SQL or API node may connect using the corresponding slot from any host which can establish a network connection to the management server host machine. This differs from the default behavior for data nodes, where localhost is assumed for HostName unless otherwise specified.

  • LocationDomainId

    Table 21.187 This table provides type and value information for the LocationDomainId API node configuration parameter

    Property Value
    Version (or later) NDB 7.6.4
    Type or units integer
    Default 0
    Range 0 - 16
    Restart Type S

    Assigns an SQL or other API node to a specific availability domain (also known as an availability zone) within a cloud. By informing NDB which nodes are in which availability domains, performance can be improved in a cloud environment in the following ways:

    • If requested data is not found on the same node, reads can be directed to another node in the same availability domain.

    • Communication between nodes in different availability domains is guaranteed to use NDB transporters' WAN support without any further manual intervention.

    • The transporter's group number can be based on which availability domain is used, so that SQL and other API nodes also communicate with local data nodes in the same availability domain whenever possible.

    • The arbitrator can be selected from an availability domain in which no data nodes are present, or, if no such availability domain can be found, from a third availability domain.

    LocationDomainId takes an integer value between 0 and 16 inclusive, with 0 being the default; using 0 is the same as leaving the parameter unset.
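
    For example, API nodes might be assigned to availability domains as follows (host names and domain numbers are hypothetical):

    [mysqld]
    HostName=sql1
    LocationDomainId=1

    [mysqld]
    HostName=sql2
    LocationDomainId=2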

  • ArbitrationRank

    Table 21.188 This table provides type and value information for the ArbitrationRank API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units 0-2
    Default 0
    Range 0 - 2
    Restart Type N

    This parameter defines which nodes can act as arbitrators. Both management nodes and SQL nodes can be arbitrators. A value of 0 means that the given node is never used as an arbitrator, a value of 1 gives the node high priority as an arbitrator, and a value of 2 gives it low priority. A normal configuration uses the management server as arbitrator, setting its ArbitrationRank to 1 (the default for management nodes) and those for all SQL nodes to 0 (the default for SQL nodes).

    By setting ArbitrationRank to 0 on all management and SQL nodes, you can disable arbitration completely. You can also control arbitration by overriding this parameter; to do so, set the Arbitration parameter in the [ndbd default] section of the config.ini global configuration file.
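
    The normal configuration just described corresponds to settings such as these:

    [ndb_mgmd]
    ArbitrationRank=1    # management server is the preferred arbitrator

    [mysqld]
    ArbitrationRank=0    # SQL nodes never act as arbitrator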

  • ArbitrationDelay

    Table 21.189 This table provides type and value information for the ArbitrationDelay API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units milliseconds
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Setting this parameter to any value other than 0 (the default) means that responses by the arbitrator to arbitration requests are delayed by the stated number of milliseconds. It is usually not necessary to change this value.

  • BatchByteSize

    Table 21.190 This table provides type and value information for the BatchByteSize API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 16K
    Range 1K - 1M
    Restart Type N

    For queries that are translated into full table scans or range scans on indexes, it is important for best performance to fetch records in properly sized batches. It is possible to set the proper size both in terms of number of records (BatchSize) and in terms of bytes (BatchByteSize). The actual batch size is limited by both parameters.

    The speed at which queries are performed can vary by more than 40% depending upon how this parameter is set.

    This parameter is measured in bytes. The default value is 16K.

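    For example, to halve both batch limits from their defaults for all API nodes, a fragment such as this could be used in config.ini (the values shown are illustrative only):

    [api default]
    BatchSize=128
    BatchByteSize=8K

    Because the actual batch size is limited by both parameters, a batch closes as soon as either limit is reached; lowering only one of the two values may therefore have no visible effect.
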
  • BatchSize

    Table 21.191 This table provides type and value information for the BatchSize API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units records
    Default 256
    Range 1 - 992
    Restart Type N

    This parameter is measured in number of records and is by default set to 256. The maximum size is 992.

  • ExtraSendBufferMemory

    Table 21.192 This table provides type and value information for the ExtraSendBufferMemory API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter specifies the amount of transporter send buffer memory to allocate in addition to any that has been set using TotalSendBufferMemory, SendBufferMemory, or both.

  • HeartbeatThreadPriority

    Table 21.193 This table provides type and value information for the HeartbeatThreadPriority API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units string
    Default [none]
    Range ...
    Restart Type S

    Use this parameter to set the scheduling policy and priority of heartbeat threads for management and API nodes. The syntax for setting this parameter is shown here:

    HeartbeatThreadPriority = policy[, priority]
    
    policy:
      {FIFO | RR}
    

    When setting this parameter, you must specify a policy: one of FIFO (first in, first out) or RR (round robin). The policy may optionally be followed by the priority (an integer).

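    For example, to run these heartbeat threads with the round-robin policy at priority 50 on all management and SQL nodes, settings such as these could be used in config.ini (the priority value is illustrative):

    [ndb_mgmd default]
    HeartbeatThreadPriority=RR,50

    [mysqld default]
    HeartbeatThreadPriority=RR,50
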
  • MaxScanBatchSize

    Table 21.194 This table provides type and value information for the MaxScanBatchSize API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 256K
    Range 32K - 16M
    Restart Type N

    The batch size is the size of each batch sent from each data node. Most scans are performed in parallel to protect the MySQL Server from receiving too much data from many nodes in parallel; this parameter sets a limit to the total batch size over all nodes.

    The default value of this parameter is set to 256KB. Its maximum size is 16MB.

  • TotalSendBufferMemory

    Table 21.195 This table provides type and value information for the TotalSendBufferMemory API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 0
    Range 256K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter is used to determine the total amount of memory to allocate on this node for shared send buffer memory among all configured transporters.

    If this parameter is set, its minimum permitted value is 256KB; 0 indicates that the parameter has not been set. For more detailed information, see Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”.

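    As an illustration, the following fragment reserves 2MB of shared send buffer memory for each API node, plus 512KB of extra send buffer memory (both sizes are examples only):

    [api default]
    TotalSendBufferMemory=2M
    ExtraSendBufferMemory=512K
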
  • AutoReconnect

    Table 21.196 This table provides type and value information for the AutoReconnect API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    This parameter is false by default. This forces disconnected API nodes (including MySQL Servers acting as SQL nodes) to use a new connection to the cluster rather than attempting to re-use an existing one, as re-use of connections can cause problems when using dynamically-allocated node IDs. (Bug #45921)

    Note

    This parameter can be overridden using the NDB API. For more information, see Ndb_cluster_connection::set_auto_reconnect(), and Ndb_cluster_connection::get_auto_reconnect().

  • DefaultOperationRedoProblemAction

    Table 21.197 This table provides type and value information for the DefaultOperationRedoProblemAction API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units enumeration
    Default QUEUE
    Range ABORT, QUEUE
    Restart Type S

    This parameter (along with RedoOverCommitLimit and RedoOverCommitCounter) controls the data node's handling of operations when too much time is taken flushing redo logs to disk. This occurs when a given redo log flush takes longer than RedoOverCommitLimit seconds, more than RedoOverCommitCounter times, causing any pending transactions to be aborted.

    When this happens, the node can respond in either of two ways, according to the value of DefaultOperationRedoProblemAction, listed here:

    • ABORT: Any pending operations from aborted transactions are also aborted.

    • QUEUE: Pending operations from transactions that were aborted are queued up to be retried. This is the default. Pending operations are still aborted when the redo log runs out of space (that is, when P_TAIL_PROBLEM errors occur).

  • DefaultHashMapSize

    Table 21.198 This table provides type and value information for the DefaultHashMapSize API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units buckets
    Default 3840
    Range 0 - 3840
    Restart Type N

    The size of the table hash maps used by NDB is configurable using this parameter. DefaultHashMapSize can take any of three possible values (0, 240, 3840). These values and their effects are described in the following table.

    Table 21.199 DefaultHashMapSize parameter values

    Value Description / Effect
    0 Use the lowest value set, if any, for this parameter among all data nodes and API nodes in the cluster; if it is not set on any data or API node, use the default value.
    240 Original hash map size (used by default prior to NDB 7.2.7).
    3840 Larger hash map size (used by default in NDB 7.2.7 and later).

    The original intended use for this parameter was to facilitate upgrades and downgrades to and from older NDB Cluster versions in which the hash map size differed, because this change was not otherwise backward compatible. This is not an issue when upgrading or downgrading from NDB Cluster 7.5.

  • Wan

    Table 21.200 This table provides type and value information for the wan API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    Use WAN TCP setting as default.

  • ConnectBackoffMaxTime

    Table 21.201 This table provides type and value information for the ConnectBackoffMaxTime API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    In an NDB Cluster with many unstarted data nodes, the value of this parameter can be raised to circumvent connection attempts to data nodes which have not yet begun to function in the cluster, as well as moderate high traffic to management nodes. As long as the API node is not connected to any new data nodes, the value of the StartConnectBackoffMaxTime parameter is applied; otherwise, ConnectBackoffMaxTime is used to determine the length of time in milliseconds to wait between connection attempts.

    Time elapsed during node connection attempts is not taken into account when calculating elapsed time for this parameter. The timeout is applied with approximately 100 ms resolution, starting with a 100 ms delay; for each subsequent attempt, the length of this period is doubled until it reaches ConnectBackoffMaxTime milliseconds, up to a maximum of 100000 ms (100s).

    Once the API node is connected to a data node and that node reports (in a heartbeat message) that it has connected to other data nodes, connection attempts to those data nodes are no longer affected by this parameter, and are made every 100 ms thereafter until connected. Once a data node has started, it can take up to HeartbeatIntervalDbApi milliseconds for the API node to be notified that this has occurred.

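    For example, with this (illustrative) setting:

    [api default]
    ConnectBackoffMaxTime=1500

    the successive delays between connection attempts are approximately 100, 200, 400, 800, 1500, 1500, ... milliseconds: each delay is double the previous one until doubling would exceed the configured maximum, after which the maximum is used.
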
  • StartConnectBackoffMaxTime

    Table 21.202 This table provides type and value information for the StartConnectBackoffMaxTime API node configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units integer
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    In an NDB Cluster with many unstarted data nodes, the value of this parameter can be raised to circumvent connection attempts to data nodes which have not yet begun to function in the cluster, as well as moderate high traffic to management nodes. As long as the API node is not connected to any new data nodes, the value of the StartConnectBackoffMaxTime parameter is applied; otherwise, ConnectBackoffMaxTime is used to determine the length of time in milliseconds to wait between connection attempts.

    Time elapsed during node connection attempts is not taken into account when calculating elapsed time for this parameter. The timeout is applied with approximately 100 ms resolution, starting with a 100 ms delay; for each subsequent attempt, the length of this period is doubled until it reaches StartConnectBackoffMaxTime milliseconds, up to a maximum of 100000 ms (100s).

    Once the API node is connected to a data node and that node reports (in a heartbeat message) that it has connected to other data nodes, connection attempts to those data nodes are no longer affected by this parameter, and are made every 100 ms thereafter until connected. Once a data node has started, it can take up to HeartbeatIntervalDbApi milliseconds for the API node to be notified that this has occurred.

API Node Debugging Parameters.  Beginning with NDB 7.5.2, you can use the ApiVerbose configuration parameter to enable debugging output from a given API node. This parameter takes an integer value. 0 is the default, and disables such debugging; 1 enables debugging output to the cluster log; 2 adds DBDICT debugging output as well. (Bug #20638450) See also DUMP 1229.

You can also obtain information from a MySQL server running as an NDB Cluster SQL node using SHOW STATUS in the mysql client, as shown here:

mysql> SHOW STATUS LIKE 'ndb%';
+-----------------------------+----------------+
| Variable_name               | Value          |
+-----------------------------+----------------+
| Ndb_cluster_node_id         | 5              |
| Ndb_config_from_host        | 198.51.100.112 |
| Ndb_config_from_port        | 1186           |
| Ndb_number_of_storage_nodes | 4              |
+-----------------------------+----------------+
4 rows in set (0.02 sec)

For information about the status variables appearing in the output from this statement, see Section 21.3.3.9.3, “NDB Cluster Status Variables”.

Note

To add new SQL or API nodes to the configuration of a running NDB Cluster, it is necessary to perform a rolling restart of all cluster nodes after adding new [mysqld] or [api] sections to the config.ini file (or files, if you are using more than one management server). This must be done before the new SQL or API nodes can connect to the cluster.

It is not necessary to perform any restart of the cluster if new SQL or API nodes can employ previously unused API slots in the cluster configuration to connect to the cluster.

21.3.3.8 Defining the System

The [system] section is used for parameters applying to the cluster as a whole. The Name system parameter is used with MySQL Enterprise Monitor; ConfigGenerationNumber and PrimaryMGMNode are not used in production environments. Except when using NDB Cluster with MySQL Enterprise Monitor, it is not necessary to have a [system] section in the config.ini file.

Restart types.  Information about the restart types used by the parameter descriptions in this section is shown in the following table:

Table 21.203 NDB Cluster restart types

Symbol Restart Type Description
N Node The parameter can be updated using a rolling restart (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”)
S System All cluster nodes must be shut down completely, then restarted, to effect a change in this parameter
I Initial Data nodes must be restarted using the --initial option

More information about these parameters can be found in the following list:

  • ConfigGenerationNumber

    Table 21.204 This table provides type and value information for the ConfigGenerationNumber system configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Configuration generation number. This parameter is currently unused.

  • Name

    Table 21.205 This table provides type and value information for the Name system configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units string
    Default [none]
    Range ...
    Restart Type N

    Set a name for the cluster. This parameter is required for deployments with MySQL Enterprise Monitor; it is otherwise unused.

    You can obtain the value of this parameter by checking the Ndb_system_name status variable. In NDB API applications, you can also retrieve it using get_system_name().

  • PrimaryMGMNode

    Table 21.206 This table provides type and value information for the PrimaryMGMNode system configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Node ID of the primary management node. This parameter is currently unused.

21.3.3.9 MySQL Server Options and Variables for NDB Cluster

This section provides information about MySQL server options, server and status variables that are specific to NDB Cluster. For general information on using these, and for other options and variables not specific to NDB Cluster, see Section 5.1, “The MySQL Server”.

For NDB Cluster configuration parameters used in the cluster configuration file (usually named config.ini), see Section 21.3, “Configuration of NDB Cluster”.

21.3.3.9.1 MySQL Server Options for NDB Cluster

This section provides descriptions of mysqld server options relating to NDB Cluster. For information about mysqld options not specific to NDB Cluster, and for general information about the use of options with mysqld, see Section 5.1.6, “Server Command Options”.

For information about command-line options used with other NDB Cluster processes (ndbd, ndb_mgmd, and ndb_mgm), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”. For information about command-line options used with NDB utility programs (such as ndb_desc, ndb_size.pl, and ndb_show_tables), see Section 21.4, “NDB Cluster Programs”.

  • --ndbcluster

    Table 21.207 Type and value information for ndbcluster

    Property Value
    Name ndbcluster
    Command Line Yes
    System Variable No
    Status Variable No
    Option File Yes
    Scope
    Dynamic No
    Type
    Default, Range FALSE (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Enable NDB Cluster (if this version of MySQL supports it)

    Disabled by --skip-ndbcluster.


    The NDBCLUSTER storage engine is necessary for using NDB Cluster. If a mysqld binary includes support for the NDBCLUSTER storage engine, the engine is disabled by default. Use the --ndbcluster option to enable it. Use --skip-ndbcluster to explicitly disable the engine.

    It is not necessary or desirable to use this option together with --initialize. Beginning with NDB 7.5.4, --ndbcluster is ignored (and the NDB storage engine is not enabled) if --initialize is also used. (Bug #81689, Bug #23518923)

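    A minimal my.cnf fragment for an SQL node therefore looks something like this (the management server address is a placeholder):

    [mysqld]
    ndbcluster
    ndb-connectstring=mgmhost:1186
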
  • --ndb-allow-copying-alter-table=[ON|OFF]

    Table 21.208 Type and value information for ndb-allow-copying-alter-table

    Property Value
    Name ndb-allow-copying-alter-table
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range ON (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Set to OFF to keep ALTER TABLE from using copying operations on NDB tables


    Let ALTER TABLE and other DDL statements use copying operations on NDB tables. Set to OFF to keep this from happening; doing so may improve performance of critical applications.

  • --ndb-batch-size=#

    Table 21.209 Type and value information for ndb-batch-size

    Property Value
    Name ndb-batch-size
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range 32768 / 0 - 31536000 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Size (in bytes) to use for NDB transaction batches


    This sets the size in bytes that is used for NDB transaction batches.

  • --ndb-cluster-connection-pool=#

    Table 21.210 Type and value information for ndb-cluster-connection-pool

    Property Value
    Name ndb-cluster-connection-pool
    Command Line Yes
    System Variable Yes
    Status Variable Yes
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range 1 / 1 - 63 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Number of connections to the cluster used by MySQL


    By setting this option to a value greater than 1 (the default), a mysqld process can use multiple connections to the cluster, effectively mimicking several SQL nodes. Each connection requires its own [api] or [mysqld] section in the cluster configuration (config.ini) file, and counts against the maximum number of API connections supported by the cluster.

    Suppose that you have 2 cluster host computers, each running an SQL node whose mysqld process was started with --ndb-cluster-connection-pool=4; this means that the cluster must have 8 API slots available for these connections (instead of 2). All of these connections are set up when the SQL node connects to the cluster, and are allocated to threads in a round-robin fashion.

    This option is useful only when running mysqld on host machines having multiple CPUs, multiple cores, or both. For best results, the value should be smaller than the total number of cores available on the host machine. Setting it to a value greater than this is likely to degrade performance severely.

    Important

    Because each SQL node using connection pooling occupies multiple API node slots—each slot having its own node ID in the cluster—you must not use a node ID as part of the cluster connection string when starting any mysqld process that employs connection pooling.

    Setting a node ID in the connection string when using the --ndb-cluster-connection-pool option causes node ID allocation errors when the SQL node attempts to connect to the cluster.

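    To continue the earlier example, a pool of 4 connections requires four [mysqld] (or [api]) sections in config.ini, and the SQL node is started without any node ID in its connection string; for instance (the host name is a placeholder):

    # config.ini: one slot per pooled connection
    [mysqld]
    [mysqld]
    [mysqld]
    [mysqld]

    shell> mysqld --ndbcluster --ndb-cluster-connection-pool=4 \
               --ndb-connectstring=mgmhost
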
  • --ndb-cluster-connection-pool-nodeids=list

    Table 21.211 Type and value information for ndb-cluster-connection-pool-nodeids

    Property Value
    Name ndb-cluster-connection-pool-nodeids
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range / (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Comma-separated list of node IDs for connections to the cluster used by MySQL; the number of nodes in the list must be the same as the value set for --ndb-cluster-connection-pool


    Specifies a comma-separated list of node IDs for connections to the cluster used by an SQL node. The number of nodes in this list must be the same as the value set for the --ndb-cluster-connection-pool option.

    --ndb-cluster-connection-pool-nodeids was added in NDB 7.5.0.

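    For example, the following my.cnf fragment binds a pool of 4 connections to node IDs 11 through 14 (the IDs are placeholders, and must correspond to [mysqld] or [api] sections in config.ini):

    [mysqld]
    ndb-cluster-connection-pool=4
    ndb-cluster-connection-pool-nodeids=11,12,13,14
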
  • --ndb-blob-read-batch-bytes=bytes

    Table 21.212 Type and value information for ndb-blob-read-batch-bytes

    Property Value
    Name ndb-blob-read-batch-bytes
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range 65536 / 0 - 4294967295 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Specifies size in bytes that large BLOB reads should be batched into. 0 = no limit.


    This option can be used to set the size (in bytes) for batching of BLOB data reads in NDB Cluster applications. When this batch size is exceeded by the amount of BLOB data to be read within the current transaction, any pending BLOB read operations are immediately executed.

    The maximum value for this option is 4294967295; the default is 65536. Setting it to 0 has the effect of disabling BLOB read batching.

    Note

    In NDB API applications, you can control BLOB read batching with the setMaxPendingBlobReadBytes() and getMaxPendingBlobReadBytes() methods.

  • --ndb-blob-write-batch-bytes=bytes

    Table 21.213 Type and value information for ndb-blob-write-batch-bytes

    Property Value
    Name ndb-blob-write-batch-bytes
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range 65536 / 0 - 4294967295 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Specifies size in bytes that large BLOB writes should be batched into. 0 = no limit.


    This option can be used to set the size (in bytes) for batching of BLOB data writes in NDB Cluster applications. When this batch size is exceeded by the amount of BLOB data to be written within the current transaction, any pending BLOB write operations are immediately executed.

    The maximum value for this option is 4294967295; the default is 65536. Setting it to 0 has the effect of disabling BLOB write batching.

    Note

    In NDB API applications, you can control BLOB write batching with the setMaxPendingBlobWriteBytes() and getMaxPendingBlobWriteBytes() methods.

  • --ndb-connectstring=connection_string

    Table 21.214 Type and value information for ndb-connectstring

    Property Value
    Name ndb-connectstring
    Command Line Yes
    System Variable No
    Status Variable No
    Option File Yes
    Scope
    Dynamic No
    Type
    Default, Range (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Point to the management server that distributes the cluster configuration


    When using the NDBCLUSTER storage engine, this option specifies the management server that distributes cluster configuration data. See Section 21.3.3.3, “NDB Cluster Connection Strings”, for syntax.

  • --ndb-default-column-format=[FIXED|DYNAMIC]

    Table 21.215 Type and value information for ndb-default-column-format

    Property Value
    Name ndb-default-column-format
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range DYNAMIC / FIXED, DYNAMIC (Version: 5.7.11-ndb-7.5.1)
    Default, Range FIXED / FIXED, DYNAMIC (Version: 5.7.16-ndb-7.5.4)
    Notes

    DESCRIPTION: Use this value (FIXED or DYNAMIC) by default for COLUMN_FORMAT and ROW_FORMAT options when creating or adding columns to a table.


    In NDB 7.5.1 and later, sets the default COLUMN_FORMAT and ROW_FORMAT for new tables (see Section 13.1.18, “CREATE TABLE Syntax”).

    In NDB 7.5.1, the default for this option was DYNAMIC; in NDB 7.5.4, the default was changed to FIXED to maintain backwards compatibility with older release series (Bug #24487363).

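    The default can also be changed at runtime through the corresponding system variable; for example, to have new NDB tables use dynamic column storage by default (the table name t1 is illustrative):

    mysql> SET GLOBAL ndb_default_column_format=DYNAMIC;
    mysql> CREATE TABLE t1 (c1 INT, c2 VARCHAR(100)) ENGINE=NDB;
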
  • --ndb-deferred-constraints=[0|1]

    Table 21.216 Type and value information for ndb-deferred-constraints

    Property Value
    Name ndb-deferred-constraints
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range 0 / 0 - 1 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Specifies that constraint checks on unique indexes (where these are supported) should be deferred until commit time. Not normally needed or used; for testing purposes only.


    Controls whether or not constraint checks on unique indexes are deferred until commit time, where such checks are supported. 0 is the default.

    This option is not normally needed for operation of NDB Cluster or NDB Cluster Replication, and is intended primarily for use in testing.


  • --ndb-distribution=[KEYHASH|LINHASH]


    Table 21.217 Type and value information for ndb-distribution


    Property Value
    Name ndb-distribution
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range KEYHASH / LINHASH, KEYHASH (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Default distribution for new tables in NDBCLUSTER (KEYHASH or LINHASH, default is KEYHASH)



    Controls the default distribution method for NDB tables. Can be set to either of KEYHASH (key hashing) or LINHASH (linear hashing). KEYHASH is the default.


  • --ndb-log-apply-status


    Table 21.218 Type and value information for ndb-log-apply-status


    Property Value
    Name ndb-log-apply-status
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Cause a MySQL server acting as a slave to log mysql.ndb_apply_status updates received from its immediate master in its own binary log, using its own server ID. Effective only if the server is started with the --ndbcluster option.



    Causes a slave mysqld to log any updates received from its immediate master to the mysql.ndb_apply_status table in its own binary log using its own server ID rather than the server ID of the master. In a circular or chain replication setting, this allows such updates to propagate to the mysql.ndb_apply_status tables of any MySQL servers configured as slaves of the current mysqld.


    In a chain replication setup, using this option allows downstream (slave) clusters to be aware of their positions relative to all of their upstream contributors (masters).


    In a circular replication setup, this option causes changes to ndb_apply_status tables to complete the entire circuit, eventually propagating back to the originating NDB Cluster. This also allows a cluster acting as a master to see when its changes (epochs) have been applied to the other clusters in the circle.


    This option has no effect unless the MySQL server is started with the --ndbcluster option.


  • --ndb-log-empty-epochs=[ON|OFF]


    Table 21.219 Type and value information for ndb-log-empty-epochs


    Property Value
    Name ndb-log-empty-epochs
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: When enabled, causes epochs in which there were no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.



    Causes epochs during which there were no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.


    By default this option is disabled. Disabling --ndb-log-empty-epochs causes epoch transactions with no changes not to be written to the binary log, although a row is still written even for an empty epoch in ndb_binlog_index.


    Because --ndb-log-empty-epochs=1 causes the size of the ndb_binlog_index table to increase independently of the size of the binary log, users should be prepared to manage the growth of this table, even if they expect the cluster to be idle a large part of the time.
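Since the variable is global and dynamic, it can also be toggled at runtime; a sketch:

```sql
-- Write a row to ndb_binlog_index even for epochs containing no changes
SET GLOBAL ndb_log_empty_epochs = ON;
-- Revert to the default behavior
SET GLOBAL ndb_log_empty_epochs = OFF;
```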


  • --ndb-log-empty-update=[ON|OFF]


    Table 21.220 Type and value information for ndb-log-empty-update


    Property Value
    Name ndb-log-empty-update
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: When enabled, causes updates that produced no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.



Causes updates that produced no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.


    By default this option is disabled (OFF). Disabling --ndb-log-empty-update causes updates with no changes not to be written to the binary log.


  • --ndb-log-exclusive-reads=[0|1]


    Table 21.221 Type and value information for ndb-log-exclusive-reads


    Property Value
    Name ndb-log-exclusive-reads
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range 0 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Log primary key reads with exclusive locks; allow conflict resolution based on read conflicts



    Starting the server with this option causes primary key reads to be logged with exclusive locks, which allows for NDB Cluster Replication conflict detection and resolution based on read conflicts. You can also enable and disable these locks at runtime by setting the value of the ndb_log_exclusive_reads system variable to 1 or 0, respectively. 0 (disable locking) is the default.


    For more information, see Read conflict detection and resolution.


  • --ndb-log-orig


    Table 21.222 Type and value information for ndb-log-orig


    Property Value
    Name ndb-log-orig
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Log originating server id and epoch in mysql.ndb_binlog_index table



    Log the originating server ID and epoch in the ndb_binlog_index table.


    Note

    This makes it possible for a given epoch to have multiple rows in ndb_binlog_index, one for each originating epoch.


    For more information, see Section 21.6.4, “NDB Cluster Replication Schema and Tables”.


  • --ndb-log-transaction-id


    Table 21.223 Type and value information for ndb-log-transaction-id


    Property Value
    Name ndb-log-transaction-id
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Write NDB transaction IDs in the binary log. Requires --log-bin-v1-events=OFF.



    Causes a slave mysqld to write the NDB transaction ID in each row of the binary log. Such logging requires the use of the Version 2 event format for the binary log; thus, the log_bin_use_v1_row_events system variable must be disabled to use this option.


    This option is not supported in mainline MySQL Server 5.7. It is required to enable NDB Cluster Replication conflict detection and resolution using the NDB$EPOCH_TRANS() function (see NDB$EPOCH_TRANS()).


    The default value is FALSE.


    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.
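Putting the two requirements together, a minimal option-file sketch:

```ini
# my.cnf fragment: NDB transaction IDs in the binary log require the
# Version 2 binary log event format
[mysqld]
ndbcluster
log-bin
log-bin-use-v1-row-events=OFF
ndb-log-transaction-id=ON
```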


  • --ndb-log-update-minimal


    Table 21.224 Type and value information for ndb-log-update-minimal


    Property Value
    Name ndb-log-update-minimal
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: 5.6.36-ndb-7.4.16)
    Default, Range OFF (Version: 5.7.18-ndb-7.5.7)
    Default, Range OFF (Version: 5.7.18-ndb-7.6.3)
    Notes

    DESCRIPTION: Log updates in a minimal format.



    Log updates in a minimal fashion, by writing only the primary key values in the before image, and only the changed columns in the after image. This may cause compatibility problems if replicating to storage engines other than NDB.


  • --ndb-mgmd-host=host[:port]


    Table 21.225 Type and value information for ndb-mgmd-host


    Property Value
    Name ndb-mgmd-host
    Command Line Yes
    System Variable No
    Status Variable No
    Option File Yes
    Scope
    Dynamic No
    Type
    Default, Range localhost:1186 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Set the host (and port, if desired) for connecting to management server



    Can be used to set the host and port number of a single management server for the program to connect to. If the program requires node IDs or references to multiple management servers (or both) in its connection information, use the --ndb-connectstring option instead.
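For example (host names are illustrative):

```ini
# my.cnf fragment: connect to a single management server
[mysqld]
ndbcluster
ndb-mgmd-host=mgm1.example.com:1186

# For node IDs or multiple management servers, use a connection string instead:
# ndb-connectstring=nodeid=50,mgm1.example.com:1186,mgm2.example.com:1186
```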


  • --ndb-nodeid=#


    Table 21.226 Type and value information for ndb-nodeid


    Property Value
    Name ndb-nodeid
    Command Line Yes
    System Variable No
    Status Variable Yes
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range / 1 - 63 (Version: 5.0.45)
    Default, Range / 1 - 255 (Version: 5.1.5)
    Notes

    DESCRIPTION: NDB Cluster node ID for this MySQL server



    Set this MySQL server's node ID in an NDB Cluster.


    The --ndb-nodeid option overrides any node ID set with --ndb-connectstring, regardless of the order in which the two options are used.


    In addition, if --ndb-nodeid is used, then either a matching node ID must be found in a [mysqld] or [api] section of config.ini, or there must be an open [mysqld] or [api] section in the file (that is, a section without a NodeId or Id parameter specified). This is also true if the node ID is specified as part of the connection string.


Regardless of how the node ID is determined, it is shown as the value of the global status variable Ndb_cluster_node_id in the output of SHOW STATUS, and as cluster_node_id in the connection row of the output of SHOW ENGINE NDBCLUSTER STATUS.


    For more information about node IDs for NDB Cluster SQL nodes, see Section 21.3.3.7, “Defining SQL and Other API Nodes in an NDB Cluster”.
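A sketch of matching entries (node ID and host names are illustrative):

```ini
# config.ini on the management server: reserve node ID 50 for this SQL node
[mysqld]
NodeId=50
HostName=sql1.example.com

# my.cnf on that SQL node then claims the matching ID:
#   [mysqld]
#   ndb-nodeid=50
```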


  • --ndb-optimization-delay=milliseconds


    Table 21.227 Type and value information for ndb-optimization-delay


    Property Value
    Name ndb-optimization-delay
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic Yes
    Type
    Default, Range 10 / 0 - 100000 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Sets the number of milliseconds to wait between processing sets of rows by OPTIMIZE TABLE on NDB tables



    Set the number of milliseconds to wait between sets of rows by OPTIMIZE TABLE statements on NDB tables. The default is 10.


  • --ndb-recv-thread-activation-threshold=threshold


    Table 21.228 Type and value information for ndb-recv-thread-activation-threshold


    Property Value
    Name ndb-recv-thread-activation-threshold
    Command Line Yes
    System Variable No
    Status Variable No
    Option File Yes
    Scope
    Dynamic No
    Type
    Default, Range 8 / 0 (MIN_ACTIVATION_THRESHOLD) - 16 (MAX_ACTIVATION_THRESHOLD) (Version: 5.6.10-ndb-7.3.1)
    Notes

    DESCRIPTION: Activation threshold when receive thread takes over the polling of the cluster connection (measured in concurrently active threads)



    When this number of concurrently active threads is reached, the receive thread takes over polling of the cluster connection.


  • --ndb-recv-thread-cpu-mask=bitmask


    Table 21.229 Type and value information for ndb-recv-thread-cpu-mask


    Property Value
    Name ndb-recv-thread-cpu-mask
    Command Line Yes
    System Variable No
    Status Variable No
    Option File Yes
    Scope
    Dynamic No
    Type
    Default, Range [empty] (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: CPU mask for locking receiver threads to specific CPUs; specified as hexadecimal. See documentation for details.



Set a CPU mask for locking receiver threads to specific CPUs, specified as a hexadecimal bitmask; for example, 0x33 means that CPUs 0, 1, 4, and 5 are used. An empty string (no locking of receiver threads) is the default.
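For example (mask value illustrative):

```ini
# my.cnf fragment: lock receiver threads to CPUs 0-3 (0x0F = binary 1111);
# the default (an empty string) leaves receiver threads unpinned
[mysqld]
ndb-recv-thread-cpu-mask=0x0F
```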


  • ndb-transid-mysql-connection-map=state


    Table 21.230 Type and value information for ndb-transid-mysql-connection-map


    Property Value
    Name ndb-transid-mysql-connection-map
    Command Line Yes
    System Variable No
    Status Variable No
    Option File No
    Scope
    Dynamic No
    Type
    Default, Range ON / ON, OFF, FORCE (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Enable or disable the ndb_transid_mysql_connection_map plugin; that is, enable or disable the INFORMATION_SCHEMA table having that name



    Enables or disables the plugin that handles the ndb_transid_mysql_connection_map table in the INFORMATION_SCHEMA database. Takes one of the values ON, OFF, or FORCE. ON (the default) enables the plugin. OFF disables the plugin, which makes ndb_transid_mysql_connection_map inaccessible. FORCE keeps the MySQL Server from starting if the plugin fails to load and start.


    You can see whether the ndb_transid_mysql_connection_map table plugin is running by checking the output of SHOW PLUGINS.
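A quick check, sketched:

```sql
-- The plugin should be listed with status ACTIVE
SHOW PLUGINS;
-- When the plugin is enabled, the table can be queried directly
SELECT * FROM INFORMATION_SCHEMA.ndb_transid_mysql_connection_map;
```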


  • --ndb-wait-connected=seconds


    Table 21.231 Type and value information for ndb-wait-connected


    Property Value
    Name ndb-wait-connected
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range 0 / 0 - 31536000 (Version: NDB 7.5-7.6)
    Default, Range 30 / 0 - 31536000 (Version: 5.1.56-ndb-7.0.27)
    Default, Range 30 / 0 - 31536000 (Version: 5.1.56-ndb-7.1.16)
    Notes

    DESCRIPTION: Time (in seconds) for the MySQL server to wait for connection to cluster management and data nodes before accepting MySQL client connections



    This option sets the period of time that the MySQL server waits for connections to NDB Cluster management and data nodes to be established before accepting MySQL client connections. The time is specified in seconds. The default value is 30.


  • --ndb-wait-setup=seconds


    Table 21.232 Type and value information for ndb-wait-setup


    Property Value
    Name ndb-wait-setup
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range 15 / 0 - 31536000 (Version: 5.1.39-ndb-6.2.19)
    Default, Range 15 / 0 - 31536000 (Version: 5.1.39-ndb-6.3.28)
    Default, Range 15 / 0 - 31536000 (Version: 5.1.39-ndb-7.0.9)
    Default, Range 30 / 0 - 31536000 (Version: 5.1.56-ndb-7.0.27)
    Default, Range 15 / 0 - 31536000 (Version: 5.1.39-ndb-7.1.0)
    Default, Range 30 / 0 - 31536000 (Version: 5.1.56-ndb-7.1.16)
    Notes

    DESCRIPTION: Time (in seconds) for the MySQL server to wait for NDB engine setup to complete



    This variable shows the period of time that the MySQL server waits for the NDB storage engine to complete setup before timing out and treating NDB as unavailable. The time is specified in seconds. The default value is 30.


  • --server-id-bits=#


    Table 21.233 Type and value information for server-id-bits


    Property Value
    Name server-id-bits
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range 32 / 7 - 32 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Sets the number of least significant bits in the server_id actually used for identifying the server, permitting NDB API applications to store application data in the most significant bits. server_id must be less than 2 to the power of this value.



    This option indicates the number of least significant bits within the 32-bit server_id which actually identify the server. Indicating that the server is actually identified by fewer than 32 bits makes it possible for some of the remaining bits to be used for other purposes, such as storing user data generated by applications using the NDB API's Event API within the AnyValue of an OperationOptions structure (NDB Cluster uses the AnyValue to store the server ID).


    When extracting the effective server ID from server_id for purposes such as detection of replication loops, the server ignores the remaining bits. The --server-id-bits option is used to mask out any irrelevant bits of server_id in the IO and SQL threads when deciding whether an event should be ignored based on the server ID.


    This data can be read from the binary log by mysqlbinlog, provided that it is run with its own --server-id-bits option set to 32 (the default).


    The value of server_id must be less than 2 ^ server_id_bits; otherwise, mysqld refuses to start.


    This system variable is supported only by NDB Cluster. It is not supported in the standard MySQL 5.7 Server.
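As a worked example, with 8 effective bits the server ID must fit in the low byte:

```ini
# my.cnf fragment: only the low 8 bits of server_id identify the server,
# so server_id must be less than 2^8 = 256; the remaining high bits are
# left free for NDB API applications
[mysqld]
server-id=200
server-id-bits=8
```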


  • --skip-ndbcluster


    Table 21.234 Type and value information for skip-ndbcluster


    Property Value
    Name skip-ndbcluster
    Command Line Yes
    System Variable No
    Status Variable No
    Option File Yes
    Scope
    Dynamic No
    Type
    Notes

    DESCRIPTION: Disable the NDB Cluster storage engine



    Disable the NDBCLUSTER storage engine. This is the default for binaries that were built with NDBCLUSTER storage engine support; the server allocates memory and other resources for this storage engine only if the --ndbcluster option is given explicitly. See Section 21.3.1, “Quick Test Setup of NDB Cluster”, for an example.


21.3.3.9.2 NDB Cluster System Variables

This section provides detailed information about MySQL server system variables that are specific to NDB Cluster and the NDB storage engine. For system variables not specific to NDB Cluster, see Section 5.1.7, “Server System Variables”. For general information on using system variables, see Section 5.1.8, “Using System Variables”.


  • ndb_autoincrement_prefetch_sz


    Table 21.235 Type and value information for ndb_autoincrement_prefetch_sz


    Property Value
    Name ndb_autoincrement_prefetch_sz
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range 32 / 1 - 256 (Version: NDB 7.5-7.6)
    Default, Range 1 / 1 - 256 (Version: 5.0.56)
    Default, Range 32 / 1 - 256 (Version: 5.1.1)
    Default, Range 1 / 1 - 256 (Version: 5.1.23)
    Default, Range 32 / 1 - 256 (Version: 5.1.16-ndb-6.2.0)
    Default, Range 1 / 1 - 256 (Version: 5.1.23-ndb-6.2.10)
    Default, Range 32 / 1 - 256 (Version: 5.1.19-ndb-6.3.0)
    Default, Range 1 / 1 - 256 (Version: 5.1.23-ndb-6.3.7)
    Default, Range 1 / 1 - 65536 (Version: 5.1.41-ndb-6.3.31)
    Default, Range 32 / 1 - 256 (Version: 5.1.30-ndb-6.4.0)
    Default, Range 1 / 1 - 65536 (Version: 5.1.41-ndb-7.0.11)
    Default, Range 1 / 1 - 65536 (Version: 5.5.15-ndb-7.2.1)
    Notes

    DESCRIPTION: NDB auto-increment prefetch size



Determines the probability of gaps in an autoincremented column. Set it to 1 to minimize this. Setting it to a high value for optimization makes inserts faster, but decreases the likelihood that consecutive autoincrement numbers will be used in a batch of inserts. The minimum and default value is 1. The maximum value for ndb_autoincrement_prefetch_sz is 65536.


    This variable affects only the number of AUTO_INCREMENT IDs that are fetched between statements; within a given statement, at least 32 IDs are obtained at a time. The default value is 1.


    Important

    This variable does not affect inserts performed using INSERT ... SELECT.
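Because the variable is dynamic with both global and session scope, it can be tuned per connection; a sketch:

```sql
-- Trade larger auto-increment gaps for faster batched inserts in this session
SET SESSION ndb_autoincrement_prefetch_sz = 1024;
-- Keep gaps minimal (the default) for new connections
SET GLOBAL ndb_autoincrement_prefetch_sz = 1;
```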


  • ndb_cache_check_time


    Table 21.236 Type and value information for ndb_cache_check_time


    Property Value
    Name ndb_cache_check_time
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range 0 / - (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Number of milliseconds between checks of cluster SQL nodes made by the MySQL query cache



    The number of milliseconds that elapse between checks of NDB Cluster SQL nodes by the MySQL query cache. Setting this to 0 (the default and minimum value) means that the query cache checks for validation on every query.


The recommended maximum value for this variable is 1000, which means that the check is performed once per second. A larger value means that the check is performed, and the cache possibly invalidated due to updates on different SQL nodes, less often. It is generally not desirable to set this to a value greater than 2000.


    Note

    The query cache is deprecated as of MySQL 5.7.20, and is removed in MySQL 8.0. Deprecation includes ndb_cache_check_time.


  • ndb_clear_apply_status


    Table 21.237 Type and value information for ndb_clear_apply_status


    Property Value
    Name ndb_clear_apply_status
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic Yes
    Type
    Default, Range ON (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Causes RESET SLAVE to clear all rows from the ndb_apply_status table; ON by default



By default, executing RESET SLAVE causes an NDB Cluster replication slave to purge all rows from its ndb_apply_status table. You can disable this by setting ndb_clear_apply_status=OFF.
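For example, to preserve the ndb_apply_status rows across a RESET SLAVE:

```sql
-- Keep the contents of mysql.ndb_apply_status when resetting the slave
SET GLOBAL ndb_clear_apply_status = OFF;
RESET SLAVE;
```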


  • ndb_data_node_neighbour


    Table 21.238 Type and value information for ndb_data_node_neighbour


    Property Value
    Name ndb_data_node_neighbour
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range 0 / 0 - 255 (Version: 5.7.12-ndb-7.5.2)
    Notes

    DESCRIPTION: Specifies cluster data node "closest" to this MySQL Server, for transaction hinting and fully replicated tables



Sets the ID of a nearest data node: a preferred nonlocal data node is chosen to execute the transaction, rather than one running on the same host as the SQL or API node. This ensures that, when a fully replicated table is accessed, it is accessed on this data node, so that the local copy of the table is used whenever possible. It can also be used to provide hints for transactions.


This can improve data access times in the case of a node that is physically closer to the SQL node, and thus has higher network throughput, than the other data nodes.


    See Section 13.1.18.10, “Setting NDB_TABLE Options”, for further information.


    Added in NDB 7.5.2.


    Note

    An equivalent method set_data_node_neighbour() is provided for use in NDB API applications.
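For example (node ID illustrative):

```sql
-- Prefer data node 2 for transaction hints and fully replicated table access
SET GLOBAL ndb_data_node_neighbour = 2;
```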


  • ndb_default_column_format


    Table 21.239 Type and value information for ndb_default_column_format


    Property Value
    Name ndb_default_column_format
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range DYNAMIC / FIXED, DYNAMIC (Version: 5.7.11-ndb-7.5.1)
    Default, Range FIXED / FIXED, DYNAMIC (Version: 5.7.16-ndb-7.5.4)
    Notes

    DESCRIPTION: Sets default row format and column format (FIXED or DYNAMIC) used for new NDB tables



    In NDB 7.5.1 and later, sets the default COLUMN_FORMAT and ROW_FORMAT for new tables (see Section 13.1.18, “CREATE TABLE Syntax”).


    In NDB 7.5.1, the default for this variable was DYNAMIC; in NDB 7.5.4, the default was changed to FIXED to maintain backwards compatibility with older release series (Bug #24487363).


  • ndb_deferred_constraints


    Table 21.240 Type and value information for ndb_deferred_constraints


    Property Value
    Name ndb_deferred_constraints
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range 0 / 0 - 1 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Specifies that constraint checks should be deferred (where these are supported). Not normally needed or used; for testing purposes only.



    Controls whether or not constraint checks are deferred, where these are supported. 0 is the default.


    This variable is not normally needed for operation of NDB Cluster or NDB Cluster Replication, and is intended primarily for use in testing.


  • ndb_distribution


    Table 21.241 Type and value information for ndb_distribution


    Property Value
    Name ndb_distribution
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range KEYHASH / LINHASH, KEYHASH (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Default distribution for new tables in NDBCLUSTER (KEYHASH or LINHASH, default is KEYHASH)



    Controls the default distribution method for NDB tables. Can be set to either of KEYHASH (key hashing) or LINHASH (linear hashing). KEYHASH is the default.


  • ndb_eventbuffer_free_percent


    Table 21.242 Type and value information for ndb_eventbuffer_free_percent


    Property Value
    Name ndb_eventbuffer_free_percent
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range 20 / 1 - 99 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Percentage of free memory that should be available in event buffer before resumption of buffering, after reaching limit set by ndb_eventbuffer_max_alloc



    Sets the percentage of the maximum memory allocated to the event buffer (ndb_eventbuffer_max_alloc) that must be free in the event buffer, once the maximum has been reached, before buffering resumes.


  • ndb_eventbuffer_max_alloc


    Table 21.243 Type and value information for ndb_eventbuffer_max_alloc


    Property Value
    Name ndb_eventbuffer_max_alloc
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range 0 / 0 - 4294967295 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Maximum memory that can be allocated for buffering events by the NDB API. Defaults to 0 (no limit).



    Sets the maximum amount of memory (in bytes) that can be allocated for buffering events by the NDB API. 0 means that no limit is imposed, and is the default.

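
    Since ndb_eventbuffer_free_percent is expressed as a percentage of this limit, the two variables are often adjusted together. A minimal sketch (the values shown are illustrative only):

    ```sql
    -- Cap NDB API event buffering at 1 GB on this SQL node
    SET GLOBAL ndb_eventbuffer_max_alloc = 1073741824;

    -- After the limit is reached, resume buffering only once
    -- 30% of that memory is free again (the default is 20)
    SET GLOBAL ndb_eventbuffer_free_percent = 30;
    ```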

  • ndb_extra_logging


    Table 21.244 Type and value information for ndb_extra_logging


    Property Value
    Name ndb_extra_logging
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range 0 / - (Version: NDB 7.5-7.6)
    Default, Range 1 / - (Version: 5.1.19-ndb-6.3.0)
    Notes

    DESCRIPTION: Controls logging of NDB Cluster schema, connection, and data distribution events in the MySQL error log



    This variable enables recording in the MySQL error log of information specific to the NDB storage engine.


    When this variable is set to 0, the only information specific to NDB that is written to the MySQL error log relates to transaction handling. If it is set to a value greater than 0 but less than 10, NDB table schema and connection events are also logged, as well as whether or not conflict resolution is in use, and other NDB errors and information. If the value is set to 10 or more, information about NDB internals, such as the progress of data distribution among cluster nodes, is also written to the MySQL error log. The default is 1.

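
    For example, the logging level might be raised at runtime as follows (the values shown are illustrative only):

    ```sql
    -- Log NDB schema and connection events in addition to
    -- transaction handling (any value from 1 to 9)
    SET GLOBAL ndb_extra_logging = 1;

    -- Also log NDB internals, such as data distribution progress
    SET GLOBAL ndb_extra_logging = 10;
    ```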

  • ndb_force_send


    Table 21.245 Type and value information for ndb_force_send


    Property Value
    Name ndb_force_send
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range TRUE (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Forces sending of buffers to NDB immediately, without waiting for other threads



    Forces sending of buffers to NDB immediately, without waiting for other threads. Defaults to ON.


  • ndb_fully_replicated


    Table 21.246 Type and value information for ndb_fully_replicated


    Property Value
    Name ndb_fully_replicated
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range OFF (Version: 5.7.12-ndb-7-5-2)
    Notes

    DESCRIPTION: Whether new NDB tables are fully replicated



    Determines whether new NDB tables are fully replicated. This setting can be overridden for an individual table using COMMENT="NDB_TABLE=FULLY_REPLICATED=..." in a CREATE TABLE or ALTER TABLE statement; see Section 13.1.18.10, “Setting NDB_TABLE Options”, for syntax and other information.


    Added in NDB 7.5.2.

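
    A sketch of the per-table override described above (the table name t1 and its columns are hypothetical):

    ```sql
    -- Make new NDB tables fully replicated by default on this SQL node
    SET GLOBAL ndb_fully_replicated = ON;

    -- Override the setting for one table only
    CREATE TABLE t1 (
        id INT PRIMARY KEY,
        val VARCHAR(32)
    ) ENGINE=NDB COMMENT="NDB_TABLE=FULLY_REPLICATED=0";
    ```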

  • ndb_index_stat_enable


    Table 21.247 Type and value information for ndb_index_stat_enable


    Property Value
    Name ndb_index_stat_enable
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Default, Range ON (Version: 5.5.15-ndb-7.2.1)
    Notes

    DESCRIPTION: Use NDB index statistics in query optimization



    Use NDB index statistics in query optimization. The default is ON.


  • ndb_index_stat_option


    Table 21.248 Type and value information for ndb_index_stat_option


    Property Value
    Name ndb_index_stat_option
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range loop_enable=1000ms,loop_idle=1000ms,loop_busy=100ms, update_batch=1,read_batch=4,idle_batch=32,check_batch=8, check_delay=10m,delete_batch=8, clean_delay=1m,error_batch=4, error_delay=1m,evict_batch=8,evict_delay=1m,cache_limit=32M, cache_lowpct=90,zero_total=0 (Version: NDB 7.5-7.6)
    Default, Range loop_checkon=1000ms,loop_idle=1000ms,loop_busy=100ms, update_batch=1,read_batch=4,idle_batch=32,check_batch=32, check_delay=1m,delete_batch=8,clean_delay=0,error_batch=4, error_delay=1m,evict_batch=8,evict_delay=1m,cache_limit=32M, cache_lowpct=90 (Version: 5.1.56-ndb-7.1.17)
    Notes

    DESCRIPTION: Comma-separated list of tunable options for NDB index statistics; the list should contain no spaces



    This variable is used for providing tuning options for NDB index statistics generation. The list consists of comma-separated name-value pairs of option names and values, and must not contain any space characters.


    Options not used when setting ndb_index_stat_option are not changed from their default values. For example, you can set ndb_index_stat_option = 'loop_idle=1000ms,cache_limit=32M'.


    Time values can be optionally suffixed with h (hours), m (minutes), or s (seconds). Millisecond values can optionally be specified using ms; millisecond values cannot be specified using h, m, or s. Integer values can be suffixed with K, M, or G.


    The names of the options that can be set using this variable are shown in the table that follows. The table also provides brief descriptions of the options, their default values, and (where applicable) their minimum and maximum values.


    Table 21.249 ndb_index_stat_option options and values


    Name Description Default/Units Minimum/Maximum
    loop_enable 1000 ms 0/4G
    loop_idle Time to sleep when idle 1000 ms 0/4G
    loop_busy Time to sleep when more work is waiting 100 ms 0/4G
    update_batch 1 0/4G
    read_batch 4 1/4G
    idle_batch 32 1/4G
    check_batch 8 1/4G
    check_delay How often to check for new statistics 10 m 1/4G
    delete_batch 8 0/4G
    clean_delay 1 m 0/4G
    error_batch 4 1/4G
    error_delay 1 m 1/4G
    evict_batch 8 1/4G
    evict_delay Clean LRU cache, from read time 1 m 0/4G
    cache_limit Maximum amount of memory in bytes used for cached index statistics by this mysqld; clean up the cache when this is exceeded. 32 M 0/4G
    cache_lowpct 90 0/100
    zero_total Setting this to 1 resets all accumulating counters in ndb_index_stat_status to 0. This option value is also reset to 0 when this is done. 0 0/1
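
    Combining the name-value list syntax with the suffixes described above, a setting such as the following is possible (the values shown are illustrative only):

    ```sql
    -- Check for new statistics every 5 minutes and allow up to
    -- 64 MB of cached index statistics; options not listed are
    -- left unchanged from their default values
    SET GLOBAL ndb_index_stat_option = 'check_delay=5m,cache_limit=64M';
    ```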

  • ndb_join_pushdown


    Table 21.250 Type and value information for ndb_join_pushdown


    Property Value
    Name ndb_join_pushdown
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic Yes
    Type
    Default, Range TRUE (Version: 5.1.51-ndb-7.2.0)
    Notes

    DESCRIPTION: Enables pushing down of joins to data nodes



    This variable controls whether joins on NDB tables are pushed down to the NDB kernel (data nodes). Previously, a join was handled using multiple accesses of NDB by the SQL node; however, when ndb_join_pushdown is enabled, a pushable join is sent in its entirety to the data nodes, where it can be distributed among the data nodes and executed in parallel on multiple copies of the data, with a single, merged result being returned to mysqld. This can greatly reduce the number of round trips between an SQL node and the data nodes required to handle such a join.


    By default, ndb_join_pushdown is enabled.


    Conditions for NDB pushdown joins.  In order for a join to be pushable, it must meet the following conditions:


    1. Only columns can be compared, and all columns to be joined must use exactly the same data type.


      This means that expressions such as t1.a = t2.a + constant cannot be pushed down, and that (for example) a join on an INT column and a BIGINT column also cannot be pushed down.


    2. Queries referencing BLOB or TEXT columns are not supported.


    3. Explicit locking is not supported; however, the NDB storage engine's characteristic implicit row-based locking is enforced.


      This means that a join using FOR UPDATE cannot be pushed down.


    4. In order for a join to be pushed down, child tables in the join must be accessed using one of the ref, eq_ref, or const access methods, or some combination of these methods.


      Outer joined child tables can only be pushed using eq_ref.


      If the root of the pushed join is an eq_ref or const, only child tables joined by eq_ref can be appended. (A table joined by ref is likely to become the root of another pushed join.)


      If the query optimizer decides on Using join cache for a candidate child table, that table cannot be pushed as a child. However, it may be the root of another set of pushed tables.


    5. Joins referencing tables explicitly partitioned by [LINEAR] HASH, LIST, or RANGE currently cannot be pushed down.


    You can see whether a given join can be pushed down by checking it with EXPLAIN; when the join can be pushed down, you can see references to the pushed join in the Extra column of the output, as shown in this example:


    mysql> EXPLAIN
        ->     SELECT e.first_name, e.last_name, t.title, d.dept_name
        ->         FROM employees e
        ->         JOIN dept_emp de ON e.emp_no=de.emp_no
        ->         JOIN departments d ON d.dept_no=de.dept_no
        ->         JOIN titles t ON e.emp_no=t.emp_no\G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: d
             type: ALL
    possible_keys: PRIMARY
              key: NULL
          key_len: NULL
              ref: NULL
             rows: 9
            Extra: Parent of 4 pushed join@1
    *************************** 2. row ***************************
               id: 1
      select_type: SIMPLE
            table: de
             type: ref
    possible_keys: PRIMARY,emp_no,dept_no
              key: dept_no
          key_len: 4
              ref: employees.d.dept_no
             rows: 5305
            Extra: Child of 'd' in pushed join@1
    *************************** 3. row ***************************
               id: 1
      select_type: SIMPLE
            table: e
             type: eq_ref
    possible_keys: PRIMARY
              key: PRIMARY
          key_len: 4
              ref: employees.de.emp_no
             rows: 1
            Extra: Child of 'de' in pushed join@1
    *************************** 4. row ***************************
               id: 1
      select_type: SIMPLE
            table: t
             type: ref
    possible_keys: PRIMARY,emp_no
              key: emp_no
          key_len: 4
              ref: employees.de.emp_no
             rows: 19
            Extra: Child of 'e' in pushed join@1
    4 rows in set (0.00 sec)
    
    Note

    If inner joined child tables are joined by ref, and the result is ordered or grouped by a sorted index, this index cannot provide sorted rows, which forces writing to a sorted tempfile.


    Two additional sources of information about pushed join performance are available:


    1. The status variables Ndb_pushed_queries_defined, Ndb_pushed_queries_dropped, Ndb_pushed_queries_executed, and Ndb_pushed_reads.


    2. The counters in the ndbinfo.counters table that belong to the DBSPJ kernel block. See Section 21.5.10.10, “The ndbinfo counters Table”, for information about these counters. See also The DBSPJ Block, in the NDB Cluster API Developer Guide.

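
    The status variables listed above can be inspected from the mysql client, for example before and after executing a candidate join:

    ```sql
    -- Counters for queries defined, dropped, and executed as
    -- pushed joins, plus reads performed by pushed joins
    SHOW GLOBAL STATUS LIKE 'Ndb_pushed%';
    ```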

  • ndb_log_apply_status


    Table 21.251 Type and value information for ndb_log_apply_status


    Property Value
    Name ndb_log_apply_status
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Whether or not a MySQL server acting as a slave logs mysql.ndb_apply_status updates received from its immediate master in its own binary log, using its own server ID



    A read-only variable which shows whether the server was started with the --ndb-log-apply-status option.


  • ndb_log_bin


    Table 21.252 Type and value information for ndb_log_bin


    Property Value
    Name ndb_log_bin
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic Yes
    Type
    Default, Range ON (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Write updates to NDB tables in the binary log. Effective only if binary logging is enabled with --log-bin.



    Causes updates to NDB tables to be written to the binary log. Setting this variable has no effect if binary logging is not already enabled for the server using log_bin. ndb_log_bin defaults to 1 (ON); normally, there is never any need to change this value in a production environment.


  • ndb_log_binlog_index


    Table 21.253 Type and value information for ndb_log_binlog_index


    Property Value
    Name ndb_log_binlog_index
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic Yes
    Type
    Default, Range ON (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Insert mapping between epochs and binary log positions into the ndb_binlog_index table. Defaults to ON. Effective only if binary logging is enabled on the server.



    Causes a mapping of epochs to positions in the binary log to be inserted into the ndb_binlog_index table. Setting this variable has no effect if binary logging is not already enabled for the server using log_bin. (In addition, ndb_log_bin must not be disabled.) ndb_log_binlog_index defaults to 1 (ON); normally, there is never any need to change this value in a production environment.


  • ndb_log_empty_epochs


    Table 21.254 Type and value information for ndb_log_empty_epochs


    Property Value
    Name ndb_log_empty_epochs
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: When enabled, epochs in which there were no changes are written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.



    When this variable is set to 0, epoch transactions with no changes are not written to the binary log, although a row is still written even for an empty epoch in ndb_binlog_index.


  • ndb_log_empty_update


    Table 21.255 Type and value information for ndb_log_empty_update


    Property Value
    Name ndb_log_empty_update
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: When enabled, updates which produce no changes are written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.



    When this variable is set to ON (1), update transactions with no changes are written to the binary log, even when log_slave_updates is enabled.


  • ndb_log_exclusive_reads


    Table 21.256 Type and value information for ndb_log_exclusive_reads


    Property Value
    Name ndb_log_exclusive_reads
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range 0 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Log primary key reads with exclusive locks; allow conflict resolution based on read conflicts



    This variable determines whether primary key reads are logged with exclusive locks, which allows for NDB Cluster Replication conflict detection and resolution based on read conflicts. To enable these locks, set the value of ndb_log_exclusive_reads to 1. 0, which disables such locking, is the default.


    For more information, see Read conflict detection and resolution.


  • ndb_log_orig


    Table 21.257 Type and value information for ndb_log_orig


    Property Value
    Name ndb_log_orig
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Whether the id and epoch of the originating server are recorded in the mysql.ndb_binlog_index table. Set using the --ndb-log-orig option when starting mysqld.



    Shows whether the originating server ID and epoch are logged in the ndb_binlog_index table. Set using the --ndb-log-orig server option.


  • ndb_log_transaction_id


    Table 21.258 Type and value information for ndb_log_transaction_id


    Property Value
    Name ndb_log_transaction_id
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic No
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Whether NDB transaction IDs are written into the binary log (Read-only.)



    This read-only, Boolean system variable shows whether a slave mysqld writes NDB transaction IDs in the binary log (required to use active-active NDB Cluster Replication with NDB$EPOCH_TRANS() conflict detection). To change the setting, use the --ndb-log-transaction-id option.


    ndb_log_transaction_id is not supported in mainline MySQL Server 5.7.


    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.


  • ndb_optimized_node_selection


    Table 21.259 Type and value information for ndb_optimized_node_selection


    Property Value
    Name ndb_optimized_node_selection
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range ON (Version: NDB 7.5-7.6)
    Default, Range 3 / 0 - 3 (Version: 5.1.22-ndb-6.3.4)
    Notes

    DESCRIPTION: Determines how an SQL node chooses a cluster data node to use as transaction coordinator



    There are two forms of optimized node selection, described here:


    1. The SQL node uses proximity to determine the transaction coordinator; that is, the data node closest to the SQL node is chosen as the transaction coordinator. For this purpose, a data node having a shared memory connection with the SQL node is considered to be closest to the SQL node; the next closest (in order of decreasing proximity) are: TCP connection to localhost, followed by TCP connection from a host other than localhost.


    2. The SQL thread uses distribution awareness to select the data node. That is, the data node housing the cluster partition accessed by the first statement of a given transaction is used as the transaction coordinator for the entire transaction. (This is effective only if the first statement of the transaction accesses no more than one cluster partition.)


    This option takes one of the integer values 0, 1, 2, or 3. 3 is the default. These values affect node selection as follows:


    • 0: Node selection is not optimized. Each data node is employed as the transaction coordinator 8 times before the SQL thread proceeds to the next data node.


    • 1: Proximity to the SQL node is used to determine the transaction coordinator.


    • 2: Distribution awareness is used to select the transaction coordinator. However, if the first statement of the transaction accesses more than one cluster partition, the SQL node reverts to the round-robin behavior seen when this option is set to 0.


    • 3: If distribution awareness can be employed to determine the transaction coordinator, then it is used; otherwise proximity is used to select the transaction coordinator. (This is the default behavior.)


    Proximity is determined as follows:


    1. Start with the value set for the Group parameter (default 55).


    2. For an API node sharing the same host with other API nodes, decrement the value by 1. Assuming the default value for Group, the effective value for data nodes on same host as the API node is 54, and for remote data nodes 55.


    3. (NDB 7.5.2 and later:) Setting ndb_data_node_neighbour further decreases the effective Group value by 50, causing this node to be regarded as the nearest node. This is needed only when all data nodes are on hosts other than the one hosting the API node and it is desirable to dedicate one of them to the API node. In normal cases, the default adjustment described previously is sufficient.


    Frequent changes in ndb_data_node_neighbour are not advisable, since this changes the state of the cluster connection and thus may disrupt the selection algorithm for new transactions from each thread until it stabilizes.

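
    Because ndb_optimized_node_selection is not dynamic, it must be set at server startup; a minimal option-file sketch (the value shown is the default):

    ```ini
    [mysqld]
    # 3: use distribution awareness when possible, otherwise proximity
    ndb-optimized-node-selection=3
    ```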

  • ndb_read_backup


    Table 21.260 Type and value information for ndb_read_backup


    Property Value
    Name ndb_read_backup
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: 5.7.12-ndb-7.5.2)
    Notes

    DESCRIPTION: Enable read from any replica



    Enable read from any replica for any NDB table subsequently created.


    Added in NDB 7.5.2.


  • ndb_recv_thread_activation_threshold


    Table 21.261 Type and value information for ndb_recv_thread_activation_threshold


    Property Value
    Name ndb_recv_thread_activation_threshold
    Command Line No
    System Variable No
    Status Variable No
    Option File No
    Scope
    Dynamic No
    Type
    Default, Range 8 / 0 (MIN_ACTIVATION_THRESHOLD) - 16 (MAX_ACTIVATION_THRESHOLD) (Version: 5.6.10-ndb-7.3.1)
    Notes

    DESCRIPTION: Activation threshold when receive thread takes over the polling of the cluster connection (measured in concurrently active threads)



    When this number of concurrently active threads is reached, the receive thread takes over polling of the cluster connection.


    This variable is global in scope. It can also be set on startup using the --ndb-recv-thread-activation-threshold option.


  • ndb_recv_thread_cpu_mask


    Table 21.262 Type and value information for ndb_recv_thread_cpu_mask


    Property Value
    Name ndb_recv_thread_cpu_mask
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic Yes
    Type
    Default, Range [empty] (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: CPU mask for locking receiver threads to specific CPUs; specified as hexadecimal. See documentation for details.



    CPU mask for locking receiver threads to specific CPUs. This is specified as a hexadecimal bitmask. For example, 0x33 means that one CPU is used per receiver thread. An empty string is the default; setting ndb_recv_thread_cpu_mask to this value removes any receiver thread locks previously set.


    This variable is global in scope. It can also be set on startup using the --ndb-recv-thread-cpu-mask option.

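
    For example (the mask value shown is illustrative only):

    ```sql
    -- Lock receiver threads to the CPUs given by hexadecimal bitmask 0x33
    SET GLOBAL ndb_recv_thread_cpu_mask = '0x33';

    -- Setting the empty string removes any receiver thread locks
    SET GLOBAL ndb_recv_thread_cpu_mask = '';
    ```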

  • ndb_report_thresh_binlog_epoch_slip


    Table 21.263 Type and value information for ndb_report_thresh_binlog_epoch_slip


    Property Value
    Name ndb_report_thresh_binlog_epoch_slip
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range 3 / 0 - 256 (Version: NDB 7.5-7.6)
    Default, Range 10 / 0 - 256 (Version: 5.7.16-ndb-7.5.4)
    Notes

    DESCRIPTION: NDB 7.5.4 and later: Threshold for number of epochs completely buffered, but not yet consumed by binlog injector thread which when exceeded generates BUFFERED_EPOCHS_OVER_THRESHOLD event buffer status message; prior to NDB 7.5.4: Threshold for number of epochs to lag behind before reporting binary log status



    In NDB 7.5.4 and later, this represents the threshold for the number of epochs completely buffered in the event buffer, but not yet consumed by the binlog injector thread. When this degree of slippage (lag) is exceeded, an event buffer status message is reported, with BUFFERED_EPOCHS_OVER_THRESHOLD supplied as the reason (see Section 21.5.7.3, “Event Buffer Reporting in the Cluster Log”). Slip is increased when an epoch is received from the data nodes and buffered completely in the event buffer; it is decreased when an epoch is consumed by the binlog injector thread. Empty epochs are buffered and queued, and so included in this calculation, only when this is enabled using the Ndb::setEventBufferQueueEmptyEpoch() method from the NDB API.


    Prior to NDB 7.5.4, the value of this variable served as a threshold for the number of epochs to lag behind before reporting binary log status. In these earlier releases, the default value of 3 means that if the difference between the latest epoch received from the storage nodes and the latest epoch applied to the binary log is 3 or more, a status message is sent to the cluster log.


  • ndb_report_thresh_binlog_mem_usage


    Table 21.264 Type and value information for ndb_report_thresh_binlog_mem_usage


    Property Value
    Name ndb_report_thresh_binlog_mem_usage
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type integer
    Default, Range 10 / 0 - 10 (Version: NDB 7.6.8)
    Notes

    DESCRIPTION: This is a threshold on the percentage of free memory remaining before reporting binary log status


    This is a threshold on the percentage of free memory remaining before reporting binary log status. For example, a value of 10 (the default) means that if the amount of available memory for receiving binary log data from the data nodes falls below 10%, a status message is sent to the cluster log.

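    As a minimal illustration (the value 10 shown here is simply the default), the variable can be set and inspected at runtime from the mysql client:

    SET GLOBAL ndb_report_thresh_binlog_mem_usage = 10;
    SHOW VARIABLES LIKE 'ndb_report_thresh_binlog_mem_usage';
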
  • ndb_row_checksum

    Table 21.265 Type and value information for ndb_row_checksum

    Property Value
    Name ndb_row_checksum
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type integer
    Default, Range 1 / 0 - 1 (Version: NDB 7.6.8)
    Notes

    DESCRIPTION: When enabled, set row checksums; enabled by default


    Traditionally, NDB has created tables with row checksums, which checks for hardware issues at the expense of performance. Setting ndb_row_checksum to 0 means that row checksums are not used for new or altered tables, which has a significant impact on performance for all types of queries. This variable is set to 1 by default, to provide backward-compatible behavior.

  • ndb_show_foreign_key_mock_tables

    Table 21.266 Type and value information for ndb_show_foreign_key_mock_tables

    Property Value
    Name ndb_show_foreign_key_mock_tables
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Show the mock tables used to support foreign_key_checks=0


    Show the mock tables used by NDB to support foreign_key_checks=0. When this is enabled, extra warnings are shown when creating and dropping the tables. The real (internal) name of the table can be seen in the output of SHOW CREATE TABLE.

  • ndb_slave_conflict_role

    Table 21.267 Type and value information for ndb_slave_conflict_role

    Property Value
    Name ndb_slave_conflict_role
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range NONE / NONE, PRIMARY, SECONDARY, PASS (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Role for slave to play in conflict detection and resolution. Value is one of PRIMARY, SECONDARY, PASS, or NONE (default). Can be changed only when slave SQL thread is stopped. See documentation for further information.


    Determines the role of this SQL node (and NDB Cluster) in a circular (active-active) replication setup. ndb_slave_conflict_role can take any one of the values PRIMARY, SECONDARY, PASS, or NONE (the default). The slave SQL thread must be stopped before you can change ndb_slave_conflict_role. In addition, it is not possible to change directly between PASS and either PRIMARY or SECONDARY; in such cases, you must ensure that the SQL thread is stopped, then execute SET @@GLOBAL.ndb_slave_conflict_role = 'NONE' first.

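    For example, because a direct change between PASS and PRIMARY is not possible, switching from PASS to PRIMARY might be performed as follows (a sketch only; the slave is assumed to be running with the PASS role):

    STOP SLAVE SQL_THREAD;
    SET @@GLOBAL.ndb_slave_conflict_role = 'NONE';
    SET @@GLOBAL.ndb_slave_conflict_role = 'PRIMARY';
    START SLAVE SQL_THREAD;
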
    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • ndb_table_no_logging

    Table 21.268 Type and value information for ndb_table_no_logging

    Property Value
    Name ndb_table_no_logging
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Session
    Dynamic Yes
    Type
    Default, Range FALSE (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: NDB tables created when this setting is enabled are not checkpointed to disk (although table schema files are created). The setting in effect when the table is created with or altered to use NDBCLUSTER persists for the lifetime of the table.


    When this variable is set to ON or 1, it causes NDB tables not to be checkpointed to disk. More specifically, this setting applies to tables which are created or altered using ENGINE NDB when ndb_table_no_logging is enabled, and continues to apply for the lifetime of the table, even if ndb_table_no_logging is later changed. Suppose that A, B, C, and D are tables that we create (and perhaps also alter), and that we also change the setting for ndb_table_no_logging as shown here:

    SET @@ndb_table_no_logging = 1;
    
    CREATE TABLE A ... ENGINE NDB;
    
    CREATE TABLE B ... ENGINE MYISAM;
    CREATE TABLE C ... ENGINE MYISAM;
    
    ALTER TABLE B ENGINE NDB;
    
    SET @@ndb_table_no_logging = 0;
    
    CREATE TABLE D ... ENGINE NDB;
    ALTER TABLE C ENGINE NDB;
    
    SET @@ndb_table_no_logging = 1;
    

    After the previous sequence of events, tables A and B are not checkpointed; A was created with ENGINE NDB and B was altered to use NDB, both while ndb_table_no_logging was enabled. However, tables C and D are logged; C was altered to use NDB and D was created using ENGINE NDB, both while ndb_table_no_logging was disabled. Setting ndb_table_no_logging back to 1 or ON does not cause table C or D to be checkpointed.

    Note

    ndb_table_no_logging has no effect on the creation of NDB table schema files; to suppress these, use ndb_table_temporary instead.

  • ndb_table_temporary

    Table 21.269 Type and value information for ndb_table_temporary

    Property Value
    Name ndb_table_temporary
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Session
    Dynamic Yes
    Type
    Default, Range FALSE (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: NDB tables are not persistent on disk: no schema files are created and the tables are not logged


    When set to ON or 1, this variable causes NDB tables not to be written to disk: This means that no table schema files are created, and that the tables are not logged.

    Note

    Setting this variable currently has no effect. This is a known issue; see Bug #34036.

  • ndb_use_copying_alter_table

    Table 21.270 Type and value information for ndb_use_copying_alter_table

    Property Value
    Name ndb_use_copying_alter_table
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic No
    Type
    Notes

    DESCRIPTION: Use copying ALTER TABLE operations in NDB Cluster


    Forces NDB to use copying of tables in the event of problems with online ALTER TABLE operations. The default value is OFF.

  • ndb_use_exact_count

    Table 21.271 Type and value information for ndb_use_exact_count

    Property Value
    Name ndb_use_exact_count
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic Yes
    Type
    Default, Range ON (Version: NDB 7.5-7.6)
    Default, Range OFF (Version: 5.1.47-ndb-7.1.8)
    Notes

    DESCRIPTION: Use exact row count when planning queries


    Forces NDB to use a count of records during SELECT COUNT(*) query planning to speed up this type of query. The default value is OFF, which allows for faster queries overall.

  • ndb_use_transactions

    Table 21.272 Type and value information for ndb_use_transactions

    Property Value
    Name ndb_use_transactions
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Both
    Dynamic Yes
    Type
    Default, Range ON (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Enables transaction support in NDB; can be set to OFF to disable transactions (not recommended)


    You can disable NDB transaction support by setting this variable's values to OFF (not recommended). The default is ON.

  • ndb_version

    Table 21.273 Type and value information for ndb_version

    Property Value
    Name ndb_version
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic No
    Type
    Default, Range (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Shows build and NDB engine version as an integer


    NDB engine version, as a composite integer.

  • ndb_version_string

    Table 21.274 Type and value information for ndb_version_string

    Property Value
    Name ndb_version_string
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic No
    Type
    Default, Range (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Shows build information including NDB engine version in ndb-x.y.z format


    NDB engine version in ndb-x.y.z format.

  • server_id_bits

    Table 21.275 Type and value information for server_id_bits

    Property Value
    Name server_id_bits
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic No
    Type
    Default, Range 32 / 7 - 32 (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: The effective value of server_id if the server was started with the --server-id-bits option set to a nondefault value


    The effective value of server_id if the server was started with the --server-id-bits option set to a nondefault value.

    If the value of server_id is greater than or equal to 2 to the power of server_id_bits, mysqld refuses to start; for example, with --server-id-bits=8, server_id must be less than 256.

    This system variable is supported only by NDB Cluster. server_id_bits is not supported by the standard MySQL Server.

  • slave_allow_batching

    Table 21.276 Type and value information for slave_allow_batching

    Property Value
    Name slave_allow_batching
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File Yes
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Turns update batching on and off for a replication slave


    Whether or not batched updates are enabled on NDB Cluster replication slaves.

    Setting this variable has an effect only when using replication with the NDB storage engine; in MySQL Server 5.7, it is present but does nothing. For more information, see Section 21.6.6, “Starting NDB Cluster Replication (Single Replication Channel)”.

  • transaction_allow_batching

    Table 21.277 Type and value information for transaction_allow_batching

    Property Value
    Name transaction_allow_batching
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Session
    Dynamic Yes
    Type
    Default, Range FALSE (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Allows batching of statements within a transaction. Disable AUTOCOMMIT to use.


    When set to 1 or ON, this variable enables batching of statements within the same transaction. To use this variable, autocommit must first be disabled by setting it to 0 or OFF; otherwise, setting transaction_allow_batching has no effect.

    It is safe to use this variable with transactions that perform writes only, as having it enabled can lead to reads from the "before" image. You should ensure that any pending transactions are committed (using an explicit COMMIT if desired) before issuing a SELECT.

    Important

    transaction_allow_batching should not be used whenever there is the possibility that the effects of a given statement depend on the outcome of a previous statement within the same transaction.

    This variable is currently supported for NDB Cluster only.

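    A minimal usage sketch, where t1 stands for a hypothetical NDB table with an integer column c; autocommit is disabled first, and the transaction is committed explicitly before any subsequent read:

    SET autocommit = 0;
    SET SESSION transaction_allow_batching = 1;
    INSERT INTO t1 (c) VALUES (1), (2), (3);
    UPDATE t1 SET c = c + 1;
    COMMIT;
    SELECT COUNT(*) FROM t1;
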
The system variables in the following list all relate to the ndbinfo information database.

  • ndbinfo_database

    Table 21.278 Type and value information for ndbinfo_database

    Property Value
    Name ndbinfo_database
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic No
    Type
    Default, Range ndbinfo (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: The name used for the NDB information database; read only


    Shows the name used for the NDB information database; the default is ndbinfo. This is a read-only variable whose value is determined at compile time; you can set it by starting the server using --ndbinfo-database=name, which sets the value shown for this variable but does not actually change the name used for the NDB information database.

  • ndbinfo_max_bytes

    Table 21.279 Type and value information for ndbinfo_max_bytes

    Property Value
    Name ndbinfo_max_bytes
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic Yes
    Type
    Default, Range 0 / - (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Used for debugging only


    Used in testing and debugging only.

  • ndbinfo_max_rows

    Table 21.280 Type and value information for ndbinfo_max_rows

    Property Value
    Name ndbinfo_max_rows
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic Yes
    Type
    Default, Range 10 / - (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Used for debugging only


    Used in testing and debugging only.

  • ndbinfo_offline

    Table 21.281 Type and value information for ndbinfo_offline

    Property Value
    Name ndbinfo_offline
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Put the ndbinfo database into offline mode, in which no rows are returned from tables or views


    Place the ndbinfo database into offline mode, in which tables and views can be opened even when they do not actually exist, or when they exist but have different definitions in NDB. No rows are returned from such tables (or views).

  • ndbinfo_show_hidden

    Table 21.282 Type and value information for ndbinfo_show_hidden

    Property Value
    Name ndbinfo_show_hidden
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic Yes
    Type
    Default, Range OFF (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: Whether to show ndbinfo internal base tables in the mysql client. The default is OFF.


    Whether or not the ndbinfo database's underlying internal tables are shown in the mysql client. The default is OFF.

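    For example, enabling the variable for the current session causes the internal base tables to appear alongside the ndbinfo views:

    SET SESSION ndbinfo_show_hidden = ON;
    SHOW TABLES FROM ndbinfo;
    SET SESSION ndbinfo_show_hidden = OFF;
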
  • ndbinfo_table_prefix

    Table 21.283 Type and value information for ndbinfo_table_prefix

    Property Value
    Name ndbinfo_table_prefix
    Command Line Yes
    System Variable Yes
    Status Variable No
    Option File No
    Scope Both
    Dynamic Yes
    Type
    Default, Range ndb$ (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: The prefix to use for naming ndbinfo internal base tables


    The prefix used in naming the ndbinfo database's base tables (normally hidden, unless exposed by setting ndbinfo_show_hidden). This is a read-only variable whose default value is ndb$. You can start the server with the --ndbinfo-table-prefix option, but this merely sets the variable and does not change the actual prefix used to name the hidden base tables; the prefix itself is determined at compile time.

  • ndbinfo_version

    Table 21.284 Type and value information for ndbinfo_version

    Property Value
    Name ndbinfo_version
    Command Line No
    System Variable Yes
    Status Variable No
    Option File No
    Scope Global
    Dynamic No
    Type
    Default, Range (Version: NDB 7.5-7.6)
    Notes

    DESCRIPTION: The version of the ndbinfo engine; read only


    Shows the version of the ndbinfo engine in use; read-only.

21.3.3.9.3 NDB Cluster Status Variables

This section provides detailed information about MySQL server status variables that relate to NDB Cluster and the NDB storage engine. For status variables not specific to NDB Cluster, and for general information on using status variables, see Section 5.1.9, “Server Status Variables”.

  • Handler_discover

    The MySQL server can ask the NDBCLUSTER storage engine if it knows about a table with a given name. This is called discovery. Handler_discover indicates the number of times that tables have been discovered using this mechanism.

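    The counter can be checked from the mysql client like this:

    SHOW GLOBAL STATUS LIKE 'Handler_discover';
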
  • Ndb_api_bytes_sent_count_session

    Amount of data (in bytes) sent to the data nodes in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

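    For example, the session-scoped and global counters can be read side by side; only the _session form is limited to the current connection:

    SHOW SESSION STATUS LIKE 'Ndb_api_bytes_sent_count_session';
    SHOW GLOBAL STATUS LIKE 'Ndb_api_bytes_sent_count';
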
    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_bytes_sent_count_slave

    Amount of data (in bytes) sent to the data nodes by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_bytes_sent_count

    Amount of data (in bytes) sent to the data nodes by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_bytes_received_count_session

    Amount of data (in bytes) received from the data nodes in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_bytes_received_count_slave

    Amount of data (in bytes) received from the data nodes by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_bytes_received_count

    Amount of data (in bytes) received from the data nodes by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_event_data_count_injector

    The number of row change events received by the NDB binlog injector thread.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_event_data_count

    The number of row change events received by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_event_nondata_count_injector

    The number of events received, other than row change events, by the NDB binary log injector thread.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_event_nondata_count

    The number of events received, other than row change events, by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_event_bytes_count_injector

    The number of bytes of events received by the NDB binlog injector thread.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_event_bytes_count

    The number of bytes of events received by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_pk_op_count_session

    The number of operations in this client session based on or using primary keys. This includes operations on blob tables, implicit unlock operations, and auto-increment operations, as well as user-visible primary key operations.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_pk_op_count_slave

    The number of operations by this slave based on or using primary keys. This includes operations on blob tables, implicit unlock operations, and auto-increment operations, as well as user-visible primary key operations.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_pk_op_count

    The number of operations by this MySQL Server (SQL node) based on or using primary keys. This includes operations on blob tables, implicit unlock operations, and auto-increment operations, as well as user-visible primary key operations.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_pruned_scan_count_session

    The number of scans in this client session that have been pruned to a single partition.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_pruned_scan_count_slave

    The number of scans by this slave that have been pruned to a single partition.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_pruned_scan_count

    The number of scans by this MySQL Server (SQL node) that have been pruned to a single partition.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_range_scan_count_session

    The number of range scans that have been started in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_range_scan_count_slave

    The number of range scans that have been started by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_range_scan_count

    The number of range scans that have been started by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.
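
    The difference in scope between a _session variable and its global counterpart can be checked directly from the mysql client; for example (a sketch, with values depending entirely on the workload):

```sql
-- Scoped to the current client session only
SHOW SESSION STATUS LIKE 'Ndb_api_range_scan_count_session';

-- Effectively global, whichever SHOW variant is used
SHOW GLOBAL STATUS LIKE 'Ndb_api_range_scan_count';
```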

  • Ndb_api_read_row_count_session

    The total number of rows that have been read in this client session. This includes all rows read by any primary key, unique key, or scan operation made in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_read_row_count_slave

    The total number of rows that have been read by this slave. This includes all rows read by any primary key, unique key, or scan operation made by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_read_row_count

    The total number of rows that have been read by this MySQL Server (SQL node). This includes all rows read by any primary key, unique key, or scan operation made by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_scan_batch_count_session

    The number of batches of rows received in this client session. 1 batch is defined as 1 set of scan results from a single fragment.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_scan_batch_count_slave

    The number of batches of rows received by this slave. 1 batch is defined as 1 set of scan results from a single fragment.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_scan_batch_count

    The number of batches of rows received by this MySQL Server (SQL node). 1 batch is defined as 1 set of scan results from a single fragment.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_table_scan_count_session

    The number of table scans that have been started in this client session, including scans of internal tables.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_table_scan_count_slave

    The number of table scans that have been started by this slave, including scans of internal tables.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_table_scan_count

    The number of table scans that have been started by this MySQL Server (SQL node), including scans of internal tables.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_abort_count_session

    The number of transactions aborted in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_abort_count_slave

    The number of transactions aborted by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_abort_count

    The number of transactions aborted by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_close_count_session

    The number of transactions closed in this client session. This value may be greater than the sum of Ndb_api_trans_commit_count_session and Ndb_api_trans_abort_count_session, since some transactions may have been rolled back.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_close_count_slave

    The number of transactions closed by this slave. This value may be greater than the sum of Ndb_api_trans_commit_count_slave and Ndb_api_trans_abort_count_slave, since some transactions may have been rolled back.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_close_count

    The number of transactions closed by this MySQL Server (SQL node). This value may be greater than the sum of Ndb_api_trans_commit_count and Ndb_api_trans_abort_count, since some transactions may have been rolled back.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.
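
    The relationship between these three counters can be checked in a single query. The following is a sketch only, and assumes performance_schema is enabled (the default in the MySQL 5.7 releases on which NDB 7.5 is based):

```sql
-- Transactions closed minus (committed + aborted) approximates
-- the number of transactions that were rolled back
SELECT
  MAX(CASE WHEN VARIABLE_NAME = 'Ndb_api_trans_close_count'
           THEN VARIABLE_VALUE + 0 END)
  - MAX(CASE WHEN VARIABLE_NAME = 'Ndb_api_trans_commit_count'
             THEN VARIABLE_VALUE + 0 END)
  - MAX(CASE WHEN VARIABLE_NAME = 'Ndb_api_trans_abort_count'
             THEN VARIABLE_VALUE + 0 END) AS rolled_back_estimate
FROM performance_schema.global_status
WHERE VARIABLE_NAME IN ('Ndb_api_trans_close_count',
                        'Ndb_api_trans_commit_count',
                        'Ndb_api_trans_abort_count');
```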

  • Ndb_api_trans_commit_count_session

    The number of transactions committed in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_commit_count_slave

    The number of transactions committed by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_commit_count

    The number of transactions committed by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_local_read_row_count_session

    The total number of rows that have been read in this client session. This includes all rows read by any primary key, unique key, or scan operation made in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_local_read_row_count_slave

    The total number of rows that have been read by this slave. This includes all rows read by any primary key, unique key, or scan operation made by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_local_read_row_count

    The total number of rows that have been read by this MySQL Server (SQL node). This includes all rows read by any primary key, unique key, or scan operation made by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_start_count_session

    The number of transactions started in this client session.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_start_count_slave

    The number of transactions started by this slave.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_trans_start_count

    The number of transactions started by this MySQL Server (SQL node).

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_uk_op_count_session

    The number of operations in this client session based on or using unique keys.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_uk_op_count_slave

    The number of operations by this slave based on or using unique keys.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_uk_op_count

    The number of operations by this MySQL Server (SQL node) based on or using unique keys.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_exec_complete_count_session

    The number of times a thread has been blocked in this client session while waiting for execution of an operation to complete. This includes all execute() calls as well as implicit executes for blob and auto-increment operations not visible to clients.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_exec_complete_count_slave

    The number of times a thread has been blocked by this slave while waiting for execution of an operation to complete. This includes all execute() calls as well as implicit executes for blob and auto-increment operations not visible to clients.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_exec_complete_count

    The number of times a thread has been blocked by this MySQL Server (SQL node) while waiting for execution of an operation to complete. This includes all execute() calls as well as implicit executes for blob and auto-increment operations not visible to clients.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_meta_request_count_session

    The number of times a thread has been blocked in this client session waiting for a metadata-based signal, such as is expected for DDL requests, new epochs, and seizure of transaction records.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_meta_request_count_slave

    The number of times a thread has been blocked by this slave waiting for a metadata-based signal, such as is expected for DDL requests, new epochs, and seizure of transaction records.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_meta_request_count

    The number of times a thread has been blocked by this MySQL Server (SQL node) waiting for a metadata-based signal, such as is expected for DDL requests, new epochs, and seizure of transaction records.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_nanos_count_session

    Total time (in nanoseconds) spent in this client session waiting for any type of signal from the data nodes.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_nanos_count_slave

    Total time (in nanoseconds) spent by this slave waiting for any type of signal from the data nodes.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_nanos_count

    Total time (in nanoseconds) spent by this MySQL Server (SQL node) waiting for any type of signal from the data nodes.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.
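
    Since this counter is cumulative and expressed in nanoseconds, it is usually easier to read after conversion. The following sketch assumes performance_schema is enabled (the default in the MySQL 5.7 releases on which NDB 7.5 is based):

```sql
-- Report the cumulative wait time in seconds rather than nanoseconds
SELECT VARIABLE_VALUE / 1000000000 AS wait_seconds
  FROM performance_schema.global_status
 WHERE VARIABLE_NAME = 'Ndb_api_wait_nanos_count';
```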

  • Ndb_api_wait_scan_result_count_session

    The number of times a thread has been blocked in this client session while waiting for a scan-based signal, such as when waiting for more results from a scan, or when waiting for a scan to close.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it relates to the current session only, and is not affected by any other clients of this mysqld.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_scan_result_count_slave

    The number of times a thread has been blocked by this slave while waiting for a scan-based signal, such as when waiting for more results from a scan, or when waiting for a scan to close.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope. If this MySQL server does not act as a replication slave, or does not use NDB tables, this value is always 0.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_api_wait_scan_result_count

    The number of times a thread has been blocked by this MySQL Server (SQL node) while waiting for a scan-based signal, such as when waiting for more results from a scan, or when waiting for a scan to close.

    Although this variable can be read using either SHOW GLOBAL STATUS or SHOW SESSION STATUS, it is effectively global in scope.

    For more information, see Section 21.5.17, “NDB API Statistics Counters and Variables”.

  • Ndb_cluster_node_id

    If the server is acting as an NDB Cluster node, then the value of this variable is its node ID in the cluster.

    If the server is not part of an NDB Cluster, then the value of this variable is 0.

  • Ndb_config_from_host

    If the server is part of an NDB Cluster, the value of this variable is the host name or IP address of the Cluster management server from which it gets its configuration data.

    If the server is not part of an NDB Cluster, then the value of this variable is an empty string.

  • Ndb_config_from_port

    If the server is part of an NDB Cluster, the value of this variable is the number of the port through which it is connected to the Cluster management server from which it gets its configuration data.

    If the server is not part of an NDB Cluster, then the value of this variable is 0.

  • Ndb_conflict_fn_max_del_win

    Shows the number of times that a row was rejected on the current SQL node due to NDB Cluster Replication conflict resolution using NDB$MAX_DELETE_WIN(), since the last time that this mysqld was started.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_fn_max

    Used in NDB Cluster Replication conflict resolution, this variable shows the number of times that a row was not applied on the current SQL node due to "greatest timestamp wins" conflict resolution since the last time that this mysqld was started.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_fn_old

    Used in NDB Cluster Replication conflict resolution, this variable shows the number of times that a row was not applied as the result of "same timestamp wins" conflict resolution on a given mysqld since the last time it was restarted.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_fn_epoch

    Used in NDB Cluster Replication conflict resolution, this variable shows the number of rows found to be in conflict using NDB$EPOCH() conflict resolution on a given mysqld since the last time it was restarted.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_fn_epoch2

    Shows the number of rows found to be in conflict in NDB Cluster Replication conflict resolution, when using NDB$EPOCH2(), on the master designated as the primary since the last time it was restarted.

    For more information, see NDB$EPOCH2().

  • Ndb_conflict_fn_epoch_trans

    Used in NDB Cluster Replication conflict resolution, this variable shows the number of rows found to be in conflict using NDB$EPOCH_TRANS() conflict resolution on a given mysqld since the last time it was restarted.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_fn_epoch2_trans

    Used in NDB Cluster Replication conflict resolution, this variable shows the number of rows found to be in conflict using NDB$EPOCH2_TRANS() conflict resolution on a given mysqld since the last time it was restarted.

    For more information, see NDB$EPOCH2_TRANS().

  • Ndb_conflict_last_conflict_epoch

    The most recent epoch in which a conflict was detected on this slave. You can compare this value with Ndb_slave_max_replicated_epoch; if Ndb_slave_max_replicated_epoch is greater than Ndb_conflict_last_conflict_epoch, no conflicts have yet been detected.

    在这个奴隶身上检测到冲突的最新纪元。您可以将此值与ndb_slave_max_replicated_epoch进行比较;如果ndb_slave_max_replicated_epoch大于ndb_conflict_last_conflict_epoch,则尚未检测到冲突。

    See Section 21.6.11, “NDB Cluster Replication Conflict Resolution”, for more information.

    有关详细信息,请参阅21.6.11节“NDB群集复制冲突解决”。

  • Ndb_conflict_reflected_op_discard_count

    When using NDB Cluster Replication conflict resolution, this is the number of reflected operations that were not applied on the secondary, due to encountering an error during execution.

    See Section 21.6.11, “NDB Cluster Replication Conflict Resolution”, for more information.

  • Ndb_conflict_reflected_op_prepare_count

    When using conflict resolution with NDB Cluster Replication, this status variable contains the number of reflected operations that have been defined (that is, prepared for execution on the secondary).

    See Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_refresh_op_count

    When using conflict resolution with NDB Cluster Replication, this gives the number of refresh operations that have been prepared for execution on the secondary.

    See Section 21.6.11, “NDB Cluster Replication Conflict Resolution”, for more information.

  • Ndb_conflict_last_stable_epoch

    Number of rows found to be in conflict by a transactional conflict function.

    See Section 21.6.11, “NDB Cluster Replication Conflict Resolution”, for more information.

  • Ndb_conflict_trans_row_conflict_count

    Used in NDB Cluster Replication conflict resolution, this status variable shows the number of rows found to be directly in-conflict by a transactional conflict function on a given mysqld since the last time it was restarted.

    Currently, the only transactional conflict detection function supported by NDB Cluster is NDB$EPOCH_TRANS(), so this status variable is effectively the same as Ndb_conflict_fn_epoch_trans.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_trans_row_reject_count

    Used in NDB Cluster Replication conflict resolution, this status variable shows the total number of rows realigned due to being determined as conflicting by a transactional conflict detection function. This includes not only Ndb_conflict_trans_row_conflict_count, but any rows in or dependent on conflicting transactions.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_trans_reject_count

    Used in NDB Cluster Replication conflict resolution, this status variable shows the number of transactions found to be in conflict by a transactional conflict detection function.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_trans_detect_iter_count

    Used in NDB Cluster Replication conflict resolution, this shows the number of internal iterations required to commit an epoch transaction. Should be (slightly) greater than or equal to Ndb_conflict_trans_conflict_commit_count.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_conflict_trans_conflict_commit_count

    Used in NDB Cluster Replication conflict resolution, this shows the number of epoch transactions committed after they required transactional conflict handling.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_epoch_delete_delete_count

    When using delete-delete conflict detection, this is the number of delete-delete conflicts detected, where a delete operation is applied, but the indicated row does not exist.

  • Ndb_execute_count

    Provides the number of round trips to the NDB kernel made by operations.

  • Ndb_last_commit_epoch_server

    The epoch most recently committed by NDB.

  • Ndb_last_commit_epoch_session

    The epoch most recently committed by this NDB client.

  • Ndb_number_of_data_nodes

    If the server is part of an NDB Cluster, the value of this variable is the number of data nodes in the cluster.

    If the server is not part of an NDB Cluster, then the value of this variable is 0.

  • Ndb_pushed_queries_defined

    The total number of joins pushed down to the NDB kernel for distributed handling on the data nodes.

    Note

    Joins tested using EXPLAIN that can be pushed down contribute to this number.

  • Ndb_pushed_queries_dropped

    The number of joins that were pushed down to the NDB kernel but that could not be handled there.

  • Ndb_pushed_queries_executed

    The number of joins successfully pushed down to NDB and executed there.

  • Ndb_pushed_reads

    The number of rows returned to mysqld from the NDB kernel by joins that were pushed down.

    Note

    Executing EXPLAIN on joins that can be pushed down to NDB does not add to this number.

  • Ndb_pruned_scan_count

    This variable holds a count of the number of scans executed by NDBCLUSTER since the NDB Cluster was last started where NDBCLUSTER was able to use partition pruning.

    Using this variable together with Ndb_scan_count can be helpful in schema design to maximize the ability of the server to prune scans to a single table partition, thereby involving only a single data node.

  • Ndb_scan_count

    This variable holds a count of the total number of scans executed by NDBCLUSTER since the NDB Cluster was last started.

  • Ndb_slave_max_replicated_epoch

    The most recently committed epoch on this slave. You can compare this value with Ndb_conflict_last_conflict_epoch; if Ndb_slave_max_replicated_epoch is the greater of the two, no conflicts have yet been detected.

    For more information, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

  • Ndb_system_name

    If this MySQL Server is connected to an NDB cluster, this read-only variable shows the cluster system name. Otherwise, the value is an empty string.

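The counters described above are ordinary server status variables, so they can be read on any SQL node. As a minimal sketch (assuming conflict resolution is in use on this server):

```sql
-- Sketch: inspect the NDB conflict-detection counters on an SQL node.
SHOW GLOBAL STATUS LIKE 'Ndb_conflict%';

-- Compare the two epochs: if Ndb_slave_max_replicated_epoch is the
-- greater of the two, no conflicts have yet been detected.
SHOW GLOBAL STATUS LIKE 'Ndb_slave_max_replicated_epoch';
SHOW GLOBAL STATUS LIKE 'Ndb_conflict_last_conflict_epoch';
```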
21.3.3.10 NDB Cluster TCP/IP Connections

TCP/IP is the default transport mechanism for all connections between nodes in an NDB Cluster. Normally it is not necessary to define TCP/IP connections; NDB Cluster automatically sets up such connections for all data nodes, management nodes, and SQL or API nodes.

Note

For an exception to this rule, see Section 21.3.3.11, “NDB Cluster TCP/IP Connections Using Direct Connections”.

To override the default connection parameters, it is necessary to define a connection using one or more [tcp] sections in the config.ini file. Each [tcp] section explicitly defines a TCP/IP connection between two NDB Cluster nodes, and must contain at a minimum the parameters NodeId1 and NodeId2, as well as any connection parameters to override.

It is also possible to change the default values for these parameters by setting them in the [tcp default] section.

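As an illustrative sketch (the node IDs and values here are hypothetical, not taken from this chapter's examples), such overrides might look like this in config.ini:

```ini
# Hypothetical config.ini fragment: change the default send buffer for
# all TCP connections, then override it for one specific connection.
[tcp default]
SendBufferMemory=4M

# Explicit connection between nodes 3 and 4, with its own overrides
[tcp]
NodeId1=3
NodeId2=4
SendBufferMemory=8M
OverloadLimit=4M
```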
Important

Any [tcp] sections in the config.ini file should be listed last, following all other sections in the file. However, this is not required for a [tcp default] section. This requirement is a known issue with the way in which the config.ini file is read by the NDB Cluster management server.

Connection parameters which can be set in [tcp] and [tcp default] sections of the config.ini file are listed here:

Restart types.  Information about the restart types used by the parameter descriptions in this section is shown in the following table:

Table 21.285 NDB Cluster restart types

Symbol Restart Type Description
N Node The parameter can be updated using a rolling restart (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”)
S System All cluster nodes must be shut down completely, then restarted, to effect a change in this parameter
I Initial Data nodes must be restarted using the --initial option

  • NodeId1

    Table 21.286 This table provides type and value information for the NodeId1 TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default [none]
    Range 1 - 255
    Restart Type N

    To identify a connection between two nodes it is necessary to provide their node IDs in the [tcp] section of the configuration file as the values of NodeId1 and NodeId2. These are the same unique Id values for each of these nodes as described in Section 21.3.3.7, “Defining SQL and Other API Nodes in an NDB Cluster”.

  • NodeId2

    Table 21.287 This table provides type and value information for the NodeId2 TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default [none]
    Range 1 - 255
    Restart Type N

    To identify a connection between two nodes it is necessary to provide their node IDs in the [tcp] section of the configuration file as the values of NodeId1 and NodeId2. These are the same unique Id values for each of these nodes as described in Section 21.3.3.7, “Defining SQL and Other API Nodes in an NDB Cluster”.

  • HostName1

    Table 21.288 This table provides type and value information for the HostName1 TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default [none]
    Range ...
    Restart Type N

    The HostName1 and HostName2 parameters can be used to specify specific network interfaces to be used for a given TCP connection between two nodes. The values used for these parameters can be host names or IP addresses.

  • HostName2

    Table 21.289 This table provides type and value information for the HostName2 TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default [none]
    Range ...
    Restart Type N

    The HostName1 and HostName2 parameters can be used to specify specific network interfaces to be used for a given TCP connection between two nodes. The values used for these parameters can be host names or IP addresses.

  • OverloadLimit

    Table 21.290 This table provides type and value information for the OverloadLimit TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    When more than this many unsent bytes are in the send buffer, the connection is considered overloaded.

    This parameter can be used to determine the amount of unsent data that must be present in the send buffer before the connection is considered overloaded. See Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”, for more information.

  • SendBufferMemory

    Table 21.291 This table provides type and value information for the SendBufferMemory TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 2M
    Range 256K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    TCP transporters use a buffer to store all messages before performing the send call to the operating system. When this buffer reaches 64KB its contents are sent; these are also sent when a round of messages have been executed. To handle temporary overload situations it is also possible to define a bigger send buffer.

    If this parameter is set explicitly, then the memory is not dedicated to each transporter; instead, the value used denotes the hard limit for how much memory (out of the total available memory—that is, TotalSendBufferMemory) that may be used by a single transporter. For more information about configuring dynamic transporter send buffer memory allocation in NDB Cluster, see Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”.

    The default size of the send buffer is 2MB, which is the size recommended in most situations. The minimum size is 256 KB; the theoretical maximum is 4 GB.

  • SendSignalId

    Table 21.292 This table provides type and value information for the SendSignalId TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default [see text]
    Range true, false
    Restart Type N

    To be able to retrace a distributed message datagram, it is necessary to identify each message. When this parameter is set to Y, message IDs are transported over the network. This feature is disabled by default in production builds, and enabled in -debug builds.

  • Checksum

    Table 21.293 This table provides type and value information for the Checksum TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    This parameter is a boolean parameter (enabled by setting it to Y or 1, disabled by setting it to N or 0). It is disabled by default. When it is enabled, checksums for all messages are calculated before they are placed in the send buffer. This feature ensures that messages are not corrupted while waiting in the send buffer, or by the transport mechanism.

  • PortNumber (OBSOLETE)

    This parameter formerly specified the port number to be used for listening for connections from other nodes. It is now deprecated (and removed in NDB Cluster 7.5); use the ServerPort data node configuration parameter for this purpose instead (Bug #77405, Bug #21280456).

  • PreSendChecksum

    Table 21.294 This table provides type and value information for the PreSendChecksum TCP configuration parameter

    Property Value
    Version (or later) NDB 7.6.6
    Type or units boolean
    Default false
    Range true, false
    Restart Type S

    If this parameter and Checksum are both enabled, perform pre-send checksum checks, and check all TCP signals between nodes for errors. Has no effect if Checksum is not also enabled.

  • ReceiveBufferMemory

    Table 21.295 This table provides type and value information for the ReceiveBufferMemory TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 2M
    Range 16K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Specifies the size of the buffer used when receiving data from the TCP/IP socket.

    The default value of this parameter is 2MB. The minimum possible value is 16KB; the theoretical maximum is 4GB.

  • TCP_RCV_BUF_SIZE

    Table 21.296 This table provides type and value information for the TCP_RCV_BUF_SIZE TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 2G
    Restart Type N

    Determines the size of the receive buffer set during TCP transporter initialization. The default and minimum value is 0, which allows the operating system or platform to set this value. The default is recommended for most common usage cases.

  • TCP_SND_BUF_SIZE

    Table 21.297 This table provides type and value information for the TCP_SND_BUF_SIZE TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 2G
    Restart Type N

    Determines the size of the send buffer set during TCP transporter initialization. The default and minimum value is 0, which allows the operating system or platform to set this value. The default is recommended for most common usage cases.

  • TCP_MAXSEG_SIZE

    Table 21.298 This table provides type and value information for the TCP_MAXSEG_SIZE TCP configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 2G
    Restart Type N

    Determines the size of the memory set during TCP transporter initialization. The default is recommended for most common usage cases.

  • TcpBind_INADDR_ANY

    Setting this parameter to TRUE or 1 binds IP_ADDR_ANY so that connections can be made from anywhere (for autogenerated connections). The default is FALSE (0).

  • Group

    When ndb_optimized_node_selection is enabled, node proximity is used in some cases to select which node to connect to. This parameter can be used to influence proximity by setting it to a lower value, which is interpreted as closer. See the description of the system variable for more information.

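As a hypothetical sketch, Group can be lowered for a particular connection so that it is treated as closer than others when ndb_optimized_node_selection is enabled:

```ini
# Hypothetical: treat the connection between nodes 3 and 4 as closer,
# so it is preferred for node selection.
[tcp]
NodeId1=3
NodeId2=4
Group=50
```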
21.3.3.11 NDB Cluster TCP/IP Connections Using Direct Connections

Setting up a cluster using direct connections between data nodes requires specifying explicitly the crossover IP addresses of the data nodes so connected in the [tcp] section of the cluster config.ini file.

In the following example, we envision a cluster with at least four hosts: one for a management server, one for an SQL node, and one for each of two data nodes. The cluster as a whole resides on the 172.23.72.* subnet of a LAN. In addition to the usual network connections, the two data nodes are connected directly using a standard crossover cable, and communicate with one another directly using IP addresses in the 1.1.0.* address range as shown:

# Management Server
[ndb_mgmd]
Id=1
HostName=172.23.72.20

# SQL Node
[mysqld]
Id=2
HostName=172.23.72.21

# Data Nodes
[ndbd]
Id=3
HostName=172.23.72.22

[ndbd]
Id=4
HostName=172.23.72.23

# TCP/IP Connections
[tcp]
NodeId1=3
NodeId2=4
HostName1=1.1.0.1
HostName2=1.1.0.2

The HostName1 and HostName2 parameters are used only when specifying direct connections.

The use of direct TCP connections between data nodes can improve the cluster's overall efficiency by enabling the data nodes to bypass an Ethernet device such as a switch, hub, or router, thus cutting down on the cluster's latency.

Note

To take the best advantage of direct connections in this fashion with more than two data nodes, you must have a direct connection between each data node and every other data node in the same node group.

21.3.3.12 NDB Cluster Shared Memory Connections

Communications between NDB cluster nodes are normally handled using TCP/IP. The shared memory (SHM) transporter is distinguished by the fact that signals are transmitted by writing in memory rather than on a socket. The shared memory transporter can improve performance by negating up to 20% of the overhead required by a TCP connection when running an API node (usually an SQL node) and a data node together on the same host. NDB Cluster attempts to use the shared memory transporter and configure it automatically between data nodes and API nodes on the same host.

In NDB 7.6.6 and later, you can enable a shared memory connection explicitly, by setting the UseShm data node configuration parameter to 1. When explicitly defining shared memory as the connection method, it is also necessary to set HostName for the data node and HostName for the API node to the same value. It is also possible to employ multiple SHM connections in the same NDB cluster, on different hosts, each having one API node and one data node; see later in this section for an example of how to do this.

Suppose a cluster is running a data node which has node ID 1 and an SQL node having node ID 51 on the same host computer at 10.0.0.1. To enable an SHM connection between these two nodes, all that is necessary is to ensure that the following entries are included in the cluster configuration file:

[ndbd]
NodeId=1
HostName=10.0.0.1
UseShm=1

[mysqld]
NodeId=51
HostName=10.0.0.1
Important

The two entries just shown are in addition to any other entries and parameter settings needed by the cluster. A more complete example is shown later in this section.

Before starting data nodes that use SHM connections, it is also necessary to make sure that the operating system on each computer hosting such a data node has sufficient memory allocated to shared memory segments. See the documentation for your operating platform for information regarding this. In setups where multiple hosts are each running a data node and an API node, it is possible to enable shared memory on all such hosts by setting UseShm in the [ndbd default] section of the configuration file. This is shown in the example later in this section.

While not strictly required, tuning for all SHM connections in the cluster can be done by setting one or more of the following parameters in the [shm default] section of the cluster configuration (config.ini) file:

  • ShmSize: Shared memory size

  • ShmSpinTime: Time in µs to spin before sleeping

  • SendBufferMemory: Size of buffer for signals sent from this node, in bytes.

  • SendSignalId: Indicates that a signal ID is included in each signal sent through the transporter.

  • Checksum: Indicates that a checksum is included in each signal sent through the transporter.

  • PreSendChecksum: Checks of the checksum are made prior to sending the signal; Checksum must also be enabled for this to work

This example shows a simple setup with SHM connections defined on multiple hosts, in an NDB Cluster using 3 computers listed here by host name, hosting the node types shown:

  1. 10.0.0.0: The management server

  2. 10.0.0.1: A data node and an SQL node

  3. 10.0.0.2: A data node and an SQL node

In this scenario, each data node communicates with both the management server and the other data node using TCP transporters; each SQL node uses a shared memory transporter to communicate with the data node that is local to it, and a TCP transporter to communicate with the remote data node. A basic configuration reflecting this setup is enabled by the config.ini file whose contents are shown here:

[ndbd default]
DataDir=/path/to/datadir
UseShm=1

[shm default]
ShmSize=8M
ShmSpintime=200
SendBufferMemory=4M

[tcp default]
SendBufferMemory=8M

[ndb_mgmd]
NodeId=49
Hostname=10.0.0.0
DataDir=/path/to/datadir

[ndbd]
NodeId=1
Hostname=10.0.0.1
DataDir=/path/to/datadir

[ndbd]
NodeId=2
Hostname=10.0.0.2
DataDir=/path/to/datadir

[mysqld]
NodeId=51
Hostname=10.0.0.1

[mysqld]
NodeId=52
Hostname=10.0.0.2

[api]
[api]

Parameters affecting all shared memory transporters are set in the [shm default] section; these can be overridden on a per-connection basis in one or more [shm] sections. Each such section must be associated with a given SHM connection using NodeId1 and NodeId2; the values required for these parameters are the node IDs of the two nodes connected by the transporter. You can also identify the nodes by host name using HostName1 and HostName2, but these parameters are not required.

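As a hypothetical sketch building on the example just shown, a per-connection override might look like this:

```ini
# Hypothetical [shm] override for the connection between data node 1
# and SQL node 51; other SHM connections keep the [shm default] values.
[shm]
NodeId1=1
NodeId2=51
ShmSize=16M
SendBufferMemory=8M
```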
The API nodes for which no host names are set use the TCP transporter to communicate with data nodes independent of the hosts on which they are started; the parameters and values set in the [tcp default] section of the configuration file apply to all TCP transporters in the cluster.

For optimum performance, you can define a spin time for the SHM transporter (ShmSpinTime parameter); this affects both the data node receiver thread and the poll owner (receive thread or user thread) in NDB.

Restart types.  Information about the restart types used by the parameter descriptions in this section is shown in the following table:

Table 21.299 NDB Cluster restart types

Symbol Restart Type Description
N Node The parameter can be updated using a rolling restart (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”)
S System All cluster nodes must be shut down completely, then restarted, to effect a change in this parameter
I Initial Data nodes must be restarted using the --initial option

  • Checksum

    Table 21.300 This table provides type and value information for the Checksum shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default true
    Range true, false
    Restart Type N

    This parameter is a boolean (Y/N) parameter which is enabled by default. When it is enabled, checksums for all messages are calculated before being placed in the send buffer.

    This feature prevents messages from being corrupted while waiting in the send buffer. It also serves as a check against data being corrupted during transport.

  • Group

    Table 21.301 This table provides type and value information for the Group shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.6.6
    Type or units numeric
    Default 35
    Range 0-200
    Restart Type N

    Determines the group proximity; a smaller value is interpreted as being closer. The default value is sufficient for most conditions.

  • HostName1

    Table 21.302 This table provides type and value information for the HostName1 shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default [none]
    Range ...
    Restart Type N

    The HostName1 and HostName2 parameters can be used to specify specific network interfaces to be used for a given SHM connection between two nodes. The values used for these parameters can be host names or IP addresses.

  • HostName2

    Table 21.303 This table provides type and value information for the HostName2 shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units name or IP address
    Default [none]
    Range ...
    Restart Type N

    The HostName1 and HostName2 parameters can be used to specify specific network interfaces to be used for a given SHM connection between two nodes. The values used for these parameters can be host names or IP addresses.

  • NodeId1

    Table 21.304 This table provides type and value information for the NodeId1 shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default [none]
    Range 1 - 255
    Restart Type N

    To identify a connection between two nodes it is necessary to provide node identifiers for each of them, as NodeId1 and NodeId2.

  • NodeId2

    Table 21.305 This table provides type and value information for the NodeId2 shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default [none]
    Range 1 - 255
    Restart Type N

    To identify a connection between two nodes it is necessary to provide node identifiers for each of them, as NodeId1 and NodeId2.
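
    Taken together, the NodeId1, NodeId2, HostName1, and HostName2 parameters define a single SHM connection in config.ini. A minimal sketch follows; the node IDs and host addresses shown are placeholders, not defaults:

    [shm]
    NodeId1=2
    NodeId2=50
    HostName1=198.51.100.10
    HostName2=198.51.100.20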

  • NodeIdServer

    Table 21.306 This table provides type and value information for the NodeIdServer shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units numeric
    Default [none]
    Range 1 - 63
    Restart Type N

    Identifies the server end of a shared memory connection. By default, this is the node ID of the data node.

  • OverloadLimit

    Table 21.307 This table provides type and value information for the OverloadLimit shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    When more than this many unsent bytes are in the send buffer, the connection is considered overloaded.

    This parameter can be used to determine the amount of unsent data that must be present in the send buffer before the connection is considered overloaded. See Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”, and Section 21.5.10.44, “The ndbinfo transporters Table”, for more information.

  • PreSendChecksum

    Table 21.308 This table provides type and value information for the PreSendChecksum shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.6.6
    Type or units boolean
    Default false
    Range true, false
    Restart Type S

    If this parameter and Checksum are both enabled, perform pre-send checksum checks, and check all SHM signals between nodes for errors. Has no effect if Checksum is not also enabled.

  • SendBufferMemory

    Table 21.309 This table provides type and value information for the SendBufferMemory shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.6.6
    Type or units integer
    Default 2M
    Range 256K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Size (in bytes) of the shared memory buffer for signals sent from this node using a shared memory connection.

  • SendSignalId

    Table 21.310 This table provides type and value information for the SendSignalId shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units boolean
    Default false
    Range true, false
    Restart Type N

    To retrace the path of a distributed message, it is necessary to provide each message with a unique identifier. Setting this parameter to Y causes these message IDs to be transported over the network as well. This feature is disabled by default in production builds, and enabled in -debug builds.

  • ShmKey

    Table 21.311 This table provides type and value information for the ShmKey shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default 0
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    When setting up shared memory segments, a node ID, expressed as an integer, is used to identify uniquely the shared memory segment to use for the communication. There is no default value. If UseShm is enabled, the shared memory key is calculated automatically by NDB.

  • ShmSize

    Table 21.312 This table provides type and value information for the ShmSize shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units bytes
    Default 1M
    Range 64K - 4294967039 (0xFFFFFEFF)
    Restart Type N
    Version (or later) NDB 7.6.6
    Type or units bytes
    Default 4M
    Range 64K - 4294967039 (0xFFFFFEFF)
    Restart Type N

    Each SHM connection has a shared memory segment where messages between nodes are placed by the sender and read by the reader. The size of this segment is defined by ShmSize. The default value in NDB 7.6.6 and later is 4MB.

  • ShmSpinTime

    Table 21.313 This table provides type and value information for the ShmSpinTime shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.6.6
    Type or units integer
    Default 0
    Range 0 - 2000
    Restart Type S

    When receiving, the time to wait before sleeping, in microseconds.

  • SigNum

    Table 21.314 This table provides type and value information for the SigNum shared memory configuration parameter

    Property Value
    Version (or later) NDB 7.5.0
    Type or units unsigned
    Default [none]
    Range 0 - 4294967039 (0xFFFFFEFF)
    Restart Type N

    This parameter is no longer used as of NDB 7.6.6; if set in NDB 7.6.6 or any later version, it is ignored.

    The following applies only in NDB 7.6.5 and earlier:

    When using the shared memory transporter, a process sends an operating system signal to the other process when there is new data available in the shared memory. Should that signal conflict with an existing signal, this parameter can be used to change it. This is a possibility when using SHM due to the fact that different operating systems use different signal numbers.

    The default value of SigNum is 0; therefore, it must be set to avoid errors in the cluster log when using the shared memory transporter. Typically, this parameter is set to 10 in the [shm default] section of the config.ini file.
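
    For example, on NDB 7.6.5 and earlier, the typical setting of 10 mentioned above can be applied to all SHM connections at once:

    [shm default]
    SigNum=10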

21.3.3.13 Configuring NDB Cluster Send Buffer Parameters

The NDB kernel employs a unified send buffer whose memory is allocated dynamically from a pool shared by all transporters. This means that the size of the send buffer can be adjusted as necessary. Configuration of the unified send buffer can be accomplished by setting the following parameters:

  • TotalSendBufferMemory.  This parameter can be set for all types of NDB Cluster nodes—that is, it can be set in the [ndbd], [mgm], and [api] (or [mysql]) sections of the config.ini file. It represents the total amount of memory (in bytes) to be allocated by each node for which it is set for use among all configured transporters. If set, its minimum is 256KB; the maximum is 4294967039.

    To be backward-compatible with existing configurations, this parameter takes as its default value the sum of the maximum send buffer sizes of all configured transporters, plus an additional 32KB (one page) per transporter. The maximum depends on the type of transporter, as shown in the following table:

    Table 21.315 Transporter types with maximum send buffer sizes

    Transporter Maximum Send Buffer Size (bytes)
    TCP SendBufferMemory (default = 2M)
    SHM 20K

    This enables existing configurations to function in much the same way as they did with NDB Cluster 6.3 and earlier, with the same amount of memory and send buffer space available to each transporter. However, memory that is unused by one transporter is not available to other transporters.
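
    As a config.ini sketch, the pool can also be sized explicitly for all data nodes; the 16M figure here is an arbitrary example, not a default:

    [ndbd default]
    TotalSendBufferMemory=16M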

  • OverloadLimit.  This parameter is used in the config.ini file [tcp] section, and denotes the amount of unsent data (in bytes) that must be present in the send buffer before the connection is considered overloaded. When such an overload condition occurs, transactions that affect the overloaded connection fail with NDB API Error 1218 (Send Buffers overloaded in NDB kernel) until the overload status passes. The default value is 0, in which case the effective overload limit is calculated as SendBufferMemory * 0.8 for a given connection. The maximum value for this parameter is 4G.
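
    As a worked example, with OverloadLimit left at its default of 0 and SendBufferMemory at its default of 2M, the effective overload limit is 2097152 × 0.8, or roughly 1.6MB of unsent data. An explicit limit can be set instead; the 4M value here is an arbitrary example:

    [tcp default]
    OverloadLimit=4M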

  • SendBufferMemory.  This value denotes a hard limit for the amount of memory that may be used by a single transporter out of the entire pool specified by TotalSendBufferMemory. However, the sum of SendBufferMemory for all configured transporters may be greater than the TotalSendBufferMemory that is set for a given node. This is a way to save memory when many nodes are in use, as long as the maximum amount of memory is never required by all transporters at the same time.

  • ReservedSendBufferMemory.  Removed in NDB 7.5.2.

    Prior to NDB 7.5.2, this data node parameter was present, but was not actually used (Bug #77404, Bug #21280428).

You can use the ndbinfo.transporters table to monitor send buffer memory usage, and to detect slowdown and overload conditions that can adversely affect performance.
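
For example, overload and slowdown states can be checked from an SQL node. This sketch assumes the column names used by the ndbinfo.transporters table in NDB 7.5 and 7.6; verify them against your version:

    mysql> SELECT node_id, remote_node_id, status, overloaded, slowdown
        -> FROM ndbinfo.transporters;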

21.3.4 Using High-Speed Interconnects with NDB Cluster

Even before design of NDBCLUSTER began in 1996, it was evident that one of the major problems to be encountered in building parallel databases would be communication between the nodes in the network. For this reason, NDBCLUSTER was designed from the very beginning to permit the use of a number of different data transport mechanisms. In this Manual, we use the term transporter for these.

The NDB Cluster codebase provides for four different transporters:

  • TCP/IP using 100 Mbps or gigabit Ethernet, as discussed in Section 21.3.3.10, “NDB Cluster TCP/IP Connections”.

  • Direct (machine-to-machine) TCP/IP; although this transporter uses the same TCP/IP protocol as mentioned in the previous item, it requires setting up the hardware differently and is configured differently as well. For this reason, it is considered a separate transport mechanism for NDB Cluster. See Section 21.3.3.11, “NDB Cluster TCP/IP Connections Using Direct Connections”, for details.

  • Shared memory (SHM). Supported in production beginning with NDB 7.6.6. For more information about SHM, see Section 21.3.3.12, “NDB Cluster Shared Memory Connections”.

  • Scalable Coherent Interface (SCI).

    Note

    Using SCI transporters in NDB Cluster requires specialized hardware, software, and MySQL binaries not available using an NDB 7.5 or 7.6 distribution. See SCI Transport Connections in NDB Cluster.

Most users today employ TCP/IP over Ethernet because it is ubiquitous. TCP/IP is also by far the best-tested transporter for use with NDB Cluster.

We are working to make sure that communication with the ndbd process is made in chunks that are as large as possible because this benefits all types of data transmission.

21.4 NDB Cluster Programs

21.4.1 ndbd — The NDB Cluster Data Node Daemon
21.4.2 ndbinfo_select_all — Select From ndbinfo Tables
21.4.3 ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)
21.4.4 ndb_mgmd — The NDB Cluster Management Server Daemon
21.4.5 ndb_mgm — The NDB Cluster Management Client
21.4.6 ndb_blob_tool — Check and Repair BLOB and TEXT columns of NDB Cluster Tables
21.4.7 ndb_config — Extract NDB Cluster Configuration Information
21.4.8 ndb_cpcd — Automate Testing for NDB Development
21.4.9 ndb_delete_all — Delete All Rows from an NDB Table
21.4.10 ndb_desc — Describe NDB Tables
21.4.11 ndb_drop_index — Drop Index from an NDB Table
21.4.12 ndb_drop_table — Drop an NDB Table
21.4.13 ndb_error_reporter — NDB Error-Reporting Utility
21.4.14 ndb_import — Import CSV Data Into NDB
21.4.15 ndb_index_stat — NDB Index Statistics Utility
21.4.16 ndb_move_data — NDB Data Copy Utility
21.4.17 ndb_perror — Obtain NDB Error Message Information
21.4.18 ndb_print_backup_file — Print NDB Backup File Contents
21.4.19 ndb_print_file — Print NDB Disk Data File Contents
21.4.20 ndb_print_frag_file — Print NDB Fragment List File Contents
21.4.21 ndb_print_schema_file — Print NDB Schema File Contents
21.4.22 ndb_print_sys_file — Print NDB System File Contents
21.4.23 ndb_redo_log_reader — Check and Print Content of Cluster Redo Log
21.4.24 ndb_restore — Restore an NDB Cluster Backup
21.4.25 ndb_select_all — Print Rows from an NDB Table
21.4.26 ndb_select_count — Print Row Counts for NDB Tables
21.4.27 ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster
21.4.28 ndb_show_tables — Display List of NDB Tables
21.4.29 ndb_size.pl — NDBCLUSTER Size Requirement Estimator
21.4.30 ndb_top — View CPU usage information for NDB threads
21.4.31 ndb_waiter — Wait for NDB Cluster to Reach a Given Status
21.4.32 Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs

Using and managing an NDB Cluster requires several specialized programs, which we describe in this chapter. We discuss the purposes of these programs in an NDB Cluster, how to use the programs, and what startup options are available for each of them.

These programs include the NDB Cluster data, management, and SQL node processes (ndbd, ndbmtd, ndb_mgmd, and mysqld) and the management client (ndb_mgm).

Information about the program ndb_setup.py, used to start the NDB Cluster Auto-Installer, is also included in this section. You should be aware that Section 21.4.27, “ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster”, contains information about the command-line client only; for information about using the GUI installer spawned by this program to configure and deploy an NDB Cluster, see Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”.

For information about using mysqld as an NDB Cluster process, see Section 21.5.4, “MySQL Server Usage for NDB Cluster”.

Other NDB utility, diagnostic, and example programs are included with the NDB Cluster distribution. These include ndb_restore, ndb_show_tables, and ndb_config. These programs are also covered in this section.

The final portion of this section contains tables of options that are common to all the various NDB Cluster programs.

21.4.1 ndbd — The NDB Cluster Data Node Daemon

ndbd is the process that is used to handle all the data in tables using the NDB Cluster storage engine. This is the process that empowers a data node to accomplish distributed transaction handling, node recovery, checkpointing to disk, online backup, and related tasks.

In an NDB Cluster, a set of ndbd processes cooperate in handling data. These processes can execute on the same computer (host) or on different computers. The correspondence between data nodes and Cluster hosts is completely configurable.

The following table includes command options specific to the NDB Cluster data node program ndbd. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndbd), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.316 Command-line options for the ndbd program

Format Description Added, Deprecated, or Removed
--bind-address=name Local bind address All MySQL 5.7 based releases
--connect-delay=# Time to wait between attempts to contact a management server, in seconds; 0 means do not wait between attempts All MySQL 5.7 based releases
--connect-retries=# Set the number of times to retry a connection before giving up; 0 means 1 attempt only (and no retries) All MySQL 5.7 based releases
--connect-retry-delay=# Time to wait between attempts to contact a management server, in seconds; 0 means do not wait between attempts All MySQL 5.7 based releases
--daemon, -d Start ndbd as daemon (default); override with --nodaemon All MySQL 5.7 based releases
--foreground Run ndbd in foreground, provided for debugging purposes (implies --nodaemon) All MySQL 5.7 based releases
--initial Perform initial start of ndbd, including cleaning the file system; consult the documentation before using this option All MySQL 5.7 based releases
--initial-start Perform partial initial start (requires --nowait-nodes) All MySQL 5.7 based releases
--install[=name] Used to install the data node process as a Windows service; does not apply on non-Windows platforms All MySQL 5.7 based releases
--logbuffer-size=# Control size of log buffer; for use when debugging with many log messages being generated; default is sufficient for normal operations ADDED: NDB 7.6.6
--nostart, -n Do not start ndbd immediately; ndbd waits for command to start from ndb_mgmd All MySQL 5.7 based releases
--nodaemon Do not start ndbd as daemon; provided for testing purposes All MySQL 5.7 based releases
--nowait-nodes=list Do not wait for these data nodes to start (takes comma-separated list of node IDs); also requires --ndb-nodeid to be used All MySQL 5.7 based releases
--remove[=name] Used to remove a data node process that was previously installed as a Windows service; does not apply on non-Windows platforms All MySQL 5.7 based releases
--verbose, -v Causes the data node process to write extra debugging information to the node log All MySQL 5.7 based releases


Note

All of these options also apply to the multithreaded version of this program (ndbmtd) and you may substitute ndbmtd for ndbd wherever the latter occurs in this section.

  • --bind-address

    Property Value
    Command-Line Format --bind-address=name
    Type String
    Default Value

    Causes ndbd to bind to a specific network interface (host name or IP address). This option has no default value.

  • --connect-delay=#

    Property Value
    Command-Line Format --connect-delay=#
    Deprecated Yes
    Type Numeric
    Default Value 5
    Minimum Value 0
    Maximum Value 3600

    Determines the time to wait between attempts to contact a management server when starting (the number of attempts is controlled by the --connect-retries option). The default is 5 seconds.

    This option is deprecated, and is subject to removal in a future release of NDB Cluster. Use --connect-retry-delay instead.

  • --connect-retries=#

    Property Value
    Command-Line Format --connect-retries=#
    Type Numeric
    Default Value 12
    Minimum Value 0
    Maximum Value 65535

    Set the number of times to retry a connection before giving up; 0 means 1 attempt only (and no retries). The default is 12 attempts. The time to wait between attempts is controlled by the --connect-retry-delay option.
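
    For example, with the defaults shown here, a data node that cannot reach the management server gives up after 12 attempts spaced 5 seconds apart, roughly 60 seconds in all. Both values can also be given explicitly:

    shell> ndbd --connect-retries=12 --connect-retry-delay=5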

  • --connect-retry-delay=#

    Property Value
    Command-Line Format --connect-retry-delay=#
    Type Numeric
    Default Value 5
    Minimum Value 0
    Maximum Value 4294967295

    Determines the time to wait between attempts to contact a management server when starting (the time between attempts is controlled by the --connect-retries option). The default is 5 seconds.

    This option takes the place of the --connect-delay option, which is now deprecated and subject to removal in a future release of NDB Cluster.

  • --daemon, -d

    Property Value
    Command-Line Format --daemon
    Type Boolean
    Default Value TRUE

    Instructs ndbd or ndbmtd to execute as a daemon process. This is the default behavior. --nodaemon can be used to prevent the process from running as a daemon.

    This option has no effect when running ndbd or ndbmtd on Windows platforms.

  • --foreground

    Property Value
    Command-Line Format --foreground
    Type Boolean
    Default Value FALSE

    Causes ndbd or ndbmtd to execute as a foreground process, primarily for debugging purposes. This option implies the --nodaemon option.

    This option has no effect when running ndbd or ndbmtd on Windows platforms.

  • --initial

    Property Value
    Command-Line Format --initial
    Type Boolean
    Default Value FALSE

    Instructs ndbd to perform an initial start. An initial start erases any files created for recovery purposes by earlier instances of ndbd. It also re-creates recovery log files. On some operating systems, this process can take a substantial amount of time.

    An --initial start is to be used only when starting the ndbd process under very special circumstances; this is because this option causes all files to be removed from the NDB Cluster file system and all redo log files to be re-created. These circumstances are listed here:

    • When performing a software upgrade which has changed the contents of any files.

    • When restarting the node with a new version of ndbd.

    • As a measure of last resort when for some reason the node restart or system restart repeatedly fails. In this case, be aware that this node can no longer be used to restore data due to the destruction of the data files.

    Warning

    To avoid the possibility of eventual data loss, it is recommended that you not use the --initial option together with StopOnError = 0. Instead, set StopOnError to 0 in config.ini only after the cluster has been started, then restart the data nodes normally—that is, without the --initial option. See the description of the StopOnError parameter for a detailed explanation of this issue. (Bug #24945638)

    Use of this option prevents the StartPartialTimeout and StartPartitionedTimeout configuration parameters from having any effect.

    Important

    This option does not affect either of the following types of files:

    • Backup files that have already been created by the affected node

    • NDB Cluster Disk Data files (see Section 21.5.13, “NDB Cluster Disk Data Tables”).

    This option also has no effect on recovery of data by a data node that is just starting (or restarting) from data nodes that are already running. This recovery of data occurs automatically, and requires no user intervention in an NDB Cluster that is running normally.

    It is permissible to use this option when starting the cluster for the very first time (that is, before any data node files have been created); however, it is not necessary to do so.

  • --initial-start

    Property Value
    Command-Line Format --initial-start
    Type Boolean
    Default Value FALSE

    This option is used when performing a partial initial start of the cluster. Each node should be started with this option, as well as --nowait-nodes.

    Suppose that you have a 4-node cluster whose data nodes have the IDs 2, 3, 4, and 5, and you wish to perform a partial initial start using only nodes 2, 4, and 5—that is, omitting node 3:

    shell> ndbd --ndb-nodeid=2 --nowait-nodes=3 --initial-start
    shell> ndbd --ndb-nodeid=4 --nowait-nodes=3 --initial-start
    shell> ndbd --ndb-nodeid=5 --nowait-nodes=3 --initial-start
    

    When using this option, you must also specify the node ID for the data node being started with the --ndb-nodeid option.

    Important

    Do not confuse this option with the --nowait-nodes option for ndb_mgmd, which can be used to enable a cluster configured with multiple management servers to be started without all management servers being online.

  • --install[=name]

    Property Value
    Command-Line Format --install[=name]
    Platform Specific Windows
    Type String
    Default Value ndbd

    Causes ndbd to be installed as a Windows service. Optionally, you can specify a name for the service; if not set, the service name defaults to ndbd. Although it is preferable to specify other ndbd program options in a my.ini or my.cnf configuration file, it is possible to use together with --install. However, in such cases, the --install option must be specified first, before any other options are given, for the Windows service installation to succeed.

    It is generally not advisable to use this option together with the --initial option, since this causes the data node file system to be wiped and rebuilt every time the service is stopped and started. Extreme care should also be taken if you intend to use any of the other ndbd options that affect the starting of data nodes—including --initial-start, --nostart, and --nowait-nodes—together with --install, and you should make absolutely certain you fully understand and allow for any possible consequences of doing so.

    The --install option has no effect on non-Windows platforms.
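
    A sketch of installing and then removing the service follows; the service name ndbd1 is an arbitrary example:

    C:\> ndbd --install=ndbd1
    C:\> ndbd --remove=ndbd1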

  • --logbuffer-size=#

    Property Value
    Command-Line Format --logbuffer-size=#
    Introduced 5.7.22-ndb-7.6.6
    Type Integer
    Default Value 32768
    Minimum Value 2048
    Maximum Value 4294967295

    Sets the size of the data node log buffer. When debugging with high amounts of extra logging, it is possible for the log buffer to run out of space if there are too many log messages, in which case some log messages can be lost. This should not occur during normal operations.

  • --nodaemon

    Property Value
    Command-Line Format --nodaemon
    Type Boolean
    Default Value FALSE

    Prevents ndbd or ndbmtd from executing as a daemon process. This option overrides the --daemon option. This is useful for redirecting output to the screen when debugging the binary.

    The default behavior for ndbd and ndbmtd on Windows is to run in the foreground, making this option unnecessary on Windows platforms, where it has no effect.

  • --nostart, -n

    Property Value
    Command-Line Format --nostart
    Type Boolean
    Default Value FALSE

    Instructs ndbd not to start automatically. When this option is used, ndbd connects to the management server, obtains configuration data from it, and initializes communication objects. However, it does not actually start the execution engine until specifically requested to do so by the management server. This can be accomplished by issuing the proper START command in the management client (see Section 21.5.2, “Commands in the NDB Cluster Management Client”).

  • --nowait-nodes=node_id_1[, node_id_2[, ...]]

    Property Value
    Command-Line Format --nowait-nodes=list
    Type String
    Default Value

    This option takes a list of data nodes for which the cluster does not wait before starting.

    This can be used to start the cluster in a partitioned state. For example, in a 4-node cluster whose data nodes have the node IDs 2, 3, 4, and 5, you can start the cluster with only half of its data nodes by starting each ndbd process with --nowait-nodes=3,5. In this case, the cluster starts as soon as nodes 2 and 4 connect, and does not wait the StartPartitionedTimeout milliseconds for nodes 3 and 5 to connect that it otherwise would.

    To start the same cluster without one of its ndbd processes (say, because the host machine for node 3 has suffered a hardware failure), start nodes 2, 4, and 5 with --nowait-nodes=3. The cluster then starts as soon as nodes 2, 4, and 5 connect, without waiting for node 3 to start.

  • --remove[=name]

    Property Value
    Command-Line Format --remove[=name]
    Platform Specific Windows
    Type String
    Default Value ndbd

    Causes an ndbd process that was previously installed as a Windows service to be removed. Optionally, you can specify a name for the service to be uninstalled; if not set, the service name defaults to ndbd.

    The --remove option has no effect on non-Windows platforms.

  • --verbose, -v

    Causes extra debug output to be written to the node log.

    In NDB 7.6.4 and later, you can also use NODELOG DEBUG ON and NODELOG DEBUG OFF to enable and disable this extra logging while the data node is running.

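The effect of --nowait-nodes on which nodes a start-up waits for can be sketched in a few lines of shell (an illustration only; the node IDs match the partitioned-start example above, and nothing here starts real ndbd processes):

```shell
# Illustrative only: compute which data nodes a cluster start would
# wait for, given all configured data nodes and a --nowait-nodes list.
all_nodes="2 3 4 5"
nowait="3 5"

wait_for=""
for n in $all_nodes; do
  skip=0
  for m in $nowait; do
    [ "$n" = "$m" ] && skip=1
  done
  [ "$skip" = 0 ] && wait_for="$wait_for $n"
done

echo "cluster start waits for:$wait_for"
```

The real --nowait-nodes option is applied by each data node at start time; the sketch merely mirrors the membership test that determines the wait set.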
ndbd generates a set of log files which are placed in the directory specified by DataDir in the config.ini configuration file.

These log files are listed below. node_id represents the node's unique identifier. For example, ndb_2_error.log is the error log generated by the data node whose node ID is 2.

  • ndb_node_id_error.log is a file containing records of all crashes which the referenced ndbd process has encountered. Each record in this file contains a brief error string and a reference to a trace file for this crash. A typical entry in this file might appear as shown here:

    Date/Time: Saturday 30 July 2004 - 00:20:01
    Type of error: error
    Message: Internal program error (failed ndbrequire)
    Fault ID: 2341
    Problem data: DbtupFixAlloc.cpp
    Object of reference: DBTUP (Line: 173)
    ProgramName: NDB Kernel
    ProcessID: 14909
    TraceFile: ndb_2_trace.log.2
    ***EOM***
    

    Listings of possible ndbd exit codes and messages generated when a data node process shuts down prematurely can be found in Data Node Error Messages.

    Important

    The last entry in the error log file is not necessarily the newest one (nor is it likely to be). Entries in the error log are not listed in chronological order; rather, they correspond to the order of the trace files as determined in the ndb_node_id_trace.log.next file (see below). Error log entries are thus overwritten in a cyclical and not sequential fashion.

  • ndb_node_id_trace.log.trace_id is a trace file describing exactly what happened just before the error occurred. This information is useful for analysis by the NDB Cluster development team.

    It is possible to configure the number of these trace files that will be created before old files are overwritten. trace_id is a number which is incremented for each successive trace file.

  • ndb_node_id_trace.log.next is the file that keeps track of the next trace file number to be assigned.

  • ndb_node_id_out.log is a file containing any data output by the ndbd process. This file is created only if ndbd is started as a daemon, which is the default behavior.

  • ndb_node_id.pid is a file containing the process ID of the ndbd process when started as a daemon. It also functions as a lock file to avoid the starting of nodes with the same identifier.

  • ndb_node_id_signal.log is a file used only in debug versions of ndbd, where it is possible to trace all incoming, outgoing, and internal messages with their data in the ndbd process.

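As a sketch of the naming scheme above, the following commands only print names and touch nothing on disk; the limit of 25 saved trace files used here is an assumed example, since the actual limit is configurable:

```shell
# Print the log file names used by the data node with node ID 2
# (illustration only; nothing here inspects a running cluster).
node_id=2
echo "ndb_${node_id}_error.log"       # cyclic error log
echo "ndb_${node_id}_trace.log.next"  # next trace file number
echo "ndb_${node_id}_out.log"         # output when running as a daemon
echo "ndb_${node_id}.pid"             # process ID / lock file

# Trace file numbers wrap around once the configured limit is reached;
# assuming a limit of 25, successive trace IDs map like this:
for trace_id in 24 25 26; do
  echo "ndb_${node_id}_trace.log.$(( (trace_id - 1) % 25 + 1 ))"
done
```

The wrap-around in the final loop is what makes the error log cyclical rather than sequential, as described in the Important note above.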
It is recommended not to use a directory mounted through NFS because in some environments this can cause problems whereby the lock on the .pid file remains in effect even after the process has terminated.

To start ndbd, it may also be necessary to specify the host name of the management server and the port on which it is listening. Optionally, one may also specify the node ID that the process is to use.

shell> ndbd --connect-string="nodeid=2;host=ndb_mgmd.mysql.com:1186"

See Section 21.3.3.3, “NDB Cluster Connection Strings”, for additional information about this issue. Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”, describes other command-line options which can be used with ndbd. For information about data node configuration parameters, see Section 21.3.3.6, “Defining NDB Cluster Data Nodes”.

When ndbd starts, it actually initiates two processes. The first of these is called the angel process; its only job is to discover when the execution process has been completed, and then to restart the ndbd process if it is configured to do so. Thus, if you attempt to kill ndbd using the Unix kill command, it is necessary to kill both processes, beginning with the angel process. The preferred method of terminating an ndbd process is to use the management client and stop the process from there.

The execution process uses one thread for reading, writing, and scanning data, as well as all other activities. This thread is implemented asynchronously so that it can easily handle thousands of concurrent actions. In addition, a watch-dog thread supervises the execution thread to make sure that it does not hang in an endless loop. A pool of threads handles file I/O, with each thread able to handle one open file. Threads can also be used for transporter connections by the transporters in the ndbd process. In a multi-processor system performing a large number of operations (including updates), the ndbd process can consume up to 2 CPUs if permitted to do so.

For a machine with many CPUs it is possible to use several ndbd processes which belong to different node groups; however, such a configuration is still considered experimental and is not supported for MySQL 5.7 in a production setting. See Section 21.1.7, “Known Limitations of NDB Cluster”.

21.4.2 ndbinfo_select_all — Select From ndbinfo Tables

ndbinfo_select_all is a client program that selects all rows and columns from one or more tables in the ndbinfo database.

Not all ndbinfo tables available in the mysql client can be read by this program. In addition, ndbinfo_select_all can show information about some tables internal to ndbinfo which cannot be accessed using SQL, including the tables and columns metadata tables.

To select from one or more ndbinfo tables using ndbinfo_select_all, it is necessary to supply the names of the tables when invoking the program as shown here:

shell> ndbinfo_select_all table_name1  [table_name2] [...]

For example:

shell> ndbinfo_select_all logbuffers logspaces
== logbuffers ==
node_id log_type        log_id  log_part        total   used    high
5       0       0       0       33554432        262144  0
6       0       0       0       33554432        262144  0
7       0       0       0       33554432        262144  0
8       0       0       0       33554432        262144  0
== logspaces ==
node_id log_type        log_id  log_part        total   used    high
5       0       0       0       268435456       0       0
5       0       0       1       268435456       0       0
5       0       0       2       268435456       0       0
5       0       0       3       268435456       0       0
6       0       0       0       268435456       0       0
6       0       0       1       268435456       0       0
6       0       0       2       268435456       0       0
6       0       0       3       268435456       0       0
7       0       0       0       268435456       0       0
7       0       0       1       268435456       0       0
7       0       0       2       268435456       0       0
7       0       0       3       268435456       0       0
8       0       0       0       268435456       0       0
8       0       0       1       268435456       0       0
8       0       0       2       268435456       0       0
8       0       0       3       268435456       0       0
shell>

The following table includes options that are specific to ndbinfo_select_all. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndbinfo_select_all), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.317 Command-line options for the ndbinfo_select_all program

Format Description Added, Deprecated, or Removed

--delay=#

Set the delay in seconds between loops. Default is 5.

All MySQL 5.7 based releases

--loops=#,

-l

Set the number of times to perform the select. Default is 1.

All MySQL 5.7 based releases

--database=db_name,

-d

Name of the database where the table is located.

All MySQL 5.7 based releases

--parallelism=#,

-p

Set the degree of parallelism.

All MySQL 5.7 based releases


  • --delay=seconds

    Property Value
    Command-Line Format --delay=#
    Type Numeric
    Default Value 5
    Minimum Value 0
    Maximum Value MAX_INT

    This option sets the number of seconds to wait between executing loops. Has no effect if --loops is set to 0 or 1.

  • --loops=number, -l number

    Property Value
    Command-Line Format --loops=#
    Type Numeric
    Default Value 1
    Minimum Value 0
    Maximum Value MAX_INT

    This option sets the number of times to execute the select. Use --delay to set the time between loops.

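Taken together, --loops and --delay amount to a simple polling loop. A rough shell equivalent follows; echo stands in for the real query, and the delay is set to zero so the sketch runs instantly:

```shell
# Rough equivalent of: ndbinfo_select_all --loops=3 --delay=5 logbuffers
# ('echo' stands in for the actual query; delay shortened for the sketch).
loops=3
delay=0   # the real default is 5 seconds
i=1
while [ "$i" -le "$loops" ]; do
  echo "== logbuffers == (iteration $i)"
  [ "$i" -lt "$loops" ] && sleep "$delay"   # no sleep after the last loop
  i=$((i + 1))
done
```

Because the sleep occurs only between iterations, the sketch also shows why --delay has no effect when --loops is 0 or 1.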
21.4.3 ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)

ndbmtd is a multithreaded version of ndbd, the process that is used to handle all the data in tables using the NDBCLUSTER storage engine. ndbmtd is intended for use on host computers having multiple CPU cores. Except where otherwise noted, ndbmtd functions in the same way as ndbd; therefore, in this section, we concentrate on the ways in which ndbmtd differs from ndbd, and you should consult Section 21.4.1, “ndbd — The NDB Cluster Data Node Daemon”, for additional information about running NDB Cluster data nodes that apply to both the single-threaded and multithreaded versions of the data node process.

Command-line options and configuration parameters used with ndbd also apply to ndbmtd. For more information about these options and parameters, see Section 21.4.1, “ndbd — The NDB Cluster Data Node Daemon”, and Section 21.3.3.6, “Defining NDB Cluster Data Nodes”, respectively.

ndbmtd is also file system-compatible with ndbd. In other words, a data node running ndbd can be stopped, the binary replaced with ndbmtd, and then restarted without any loss of data. (However, when doing this, you must make sure that MaxNoOfExecutionThreads is set to an appropriate value before restarting the node if you wish for ndbmtd to run in multithreaded fashion.) Similarly, an ndbmtd binary can be replaced with ndbd simply by stopping the node and then starting ndbd in place of the multithreaded binary. It is not necessary when switching between the two to start the data node binary using --initial.

Using ndbmtd differs from using ndbd in two key respects:

  1. Because ndbmtd runs by default in single-threaded mode (that is, it behaves like ndbd), you must configure it to use multiple threads. This can be done by setting an appropriate value in the config.ini file for the MaxNoOfExecutionThreads configuration parameter or the ThreadConfig configuration parameter. Using MaxNoOfExecutionThreads is simpler, but ThreadConfig offers more flexibility. For more information about these configuration parameters and their use, see Multi-Threading Configuration Parameters (ndbmtd).

  2. Trace files are generated by critical errors in ndbmtd processes in a somewhat different fashion from how these are generated by ndbd failures. These differences are discussed in more detail in the next few paragraphs.

Like ndbd, ndbmtd generates a set of log files which are placed in the directory specified by DataDir in the config.ini configuration file. Except for trace files, these are generated in the same way and have the same names as those generated by ndbd.

In the event of a critical error, ndbmtd generates trace files describing what happened just prior to the error's occurrence. These files, which can be found in the data node's DataDir, are useful for analysis of problems by the NDB Cluster Development and Support teams. One trace file is generated for each ndbmtd thread. The names of these files have the following pattern:

ndb_node_id_trace.log.trace_id_tthread_id

In this pattern, node_id stands for the data node's unique node ID in the cluster, trace_id is a trace sequence number, and thread_id is the thread ID. For example, in the event of the failure of an ndbmtd process running as an NDB Cluster data node having the node ID 3 and with MaxNoOfExecutionThreads equal to 4, four trace files are generated in the data node's data directory. If the is the first time this node has failed, then these files are named ndb_3_trace.log.1_t1, ndb_3_trace.log.1_t2, ndb_3_trace.log.1_t3, and ndb_3_trace.log.1_t4. Internally, these trace files follow the same format as ndbd trace files.

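The per-thread naming pattern can be generated mechanically, as in this sketch (which only prints the expected names for the example just given; it does not create or inspect real trace files):

```shell
# Print the trace file names ndbmtd generates for node ID 3 on its first
# failure when running with four execution threads (illustration only).
node_id=3
trace_id=1
for thread_id in 1 2 3 4; do
  echo "ndb_${node_id}_trace.log.${trace_id}_t${thread_id}"
done
```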
The ndbd exit codes and messages that are generated when a data node process shuts down prematurely are also used by ndbmtd. See Data Node Error Messages, for a listing of these.

Note

It is possible to use ndbd and ndbmtd concurrently on different data nodes in the same NDB Cluster. However, such configurations have not been tested extensively; thus, we cannot recommend doing so in a production setting at this time.

21.4.4 ndb_mgmd — The NDB Cluster Management Server Daemon

The management server is the process that reads the cluster configuration file and distributes this information to all nodes in the cluster that request it. It also maintains a log of cluster activities. Management clients can connect to the management server and check the cluster's status.

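For context, a minimal cluster configuration file of the kind ndb_mgmd reads and distributes might look like the following sketch (the host names and data directory here are placeholders, not defaults):

```ini
# Minimal example config.ini (host names and paths are placeholders)
[ndbd default]
NoOfReplicas=2
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
NodeId=1
HostName=mgmd.example.com

[ndbd]
NodeId=2
HostName=datanode1.example.com

[ndbd]
NodeId=3
HostName=datanode2.example.com

[mysqld]
NodeId=4
```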
The following table includes options that are specific to the NDB Cluster management server program ndb_mgmd. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_mgmd), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.318 Command-line options for the ndb_mgmd program

Format Description Added, Deprecated, or Removed

--bind-address=host

Local bind address

All MySQL 5.7 based releases

--config-cache[=TRUE|FALSE]

Enable the management server configuration cache; TRUE by default.

All MySQL 5.7 based releases

--config-file=file (>=),

-f (>=)

Specify the cluster configuration file; in NDB 6.4.0 and later, needs --reload or --initial to override the configuration cache if present

All MySQL 5.7 based releases

--configdir=directory,

--config-dir=directory (>=7.0.8)

Specify the cluster management server's configuration cache directory

All MySQL 5.7 based releases

--daemon,

-d

Run ndb_mgmd in daemon mode (default)

All MySQL 5.7 based releases

--initial

Causes the management server to reload its configuration data from the configuration file, bypassing the configuration cache

All MySQL 5.7 based releases

--install[=name]

Used to install the management server process as a Windows service. Does not apply on non-Windows platforms.

All MySQL 5.7 based releases

--interactive

Run ndb_mgmd in interactive mode (not officially supported in production; for testing purposes only)

All MySQL 5.7 based releases

--log-name=name

A name to use when writing messages applying to this node in the cluster log.

All MySQL 5.7 based releases

--mycnf

Read cluster configuration data from the my.cnf file

All MySQL 5.7 based releases

--no-nodeid-checks

Do not perform any node ID checks

All MySQL 5.7 based releases

--nodaemon

Do not run ndb_mgmd as a daemon

All MySQL 5.7 based releases

--nowait-nodes=list

Do not wait for these management nodes when starting this management server. Also requires --ndb-nodeid to be used.

All MySQL 5.7 based releases

--print-full-config,

-P

Print full configuration and exit

All MySQL 5.7 based releases

--reload

Causes the management server to compare the configuration file with its configuration cache

All MySQL 5.7 based releases

--remove[=name]

Used to remove a management server process that was previously installed as a Windows service, optionally specifying the name of the service to be removed. Does not apply on non-Windows platforms.

All MySQL 5.7 based releases

--verbose,

-v

Write additional information to the log.

All MySQL 5.7 based releases


  • --bind-address=host

    Property Value
    Command-Line Format --bind-address=host
    Type String
    Default Value [none]

    Causes the management server to bind to a specific network interface (host name or IP address). This option has no default value.

  • --config-cache

    Property Value
    Command-Line Format --config-cache[=TRUE|FALSE]
    Type Boolean
    Default Value TRUE

    This option, whose default value is 1 (or TRUE, or ON), can be used to disable the management server's configuration cache, so that it reads its configuration from config.ini every time it starts (see Section 21.3.3, “NDB Cluster Configuration Files”). You can do this by starting the ndb_mgmd process with any one of the following options:

    • --config-cache=0

    • --config-cache=FALSE

    • --config-cache=OFF

    • --skip-config-cache

    Using one of the options just listed is effective only if the management server has no stored configuration at the time it is started. If the management server finds any configuration cache files, then the --config-cache option or the --skip-config-cache option is ignored. Therefore, to disable configuration caching, the option should be used the first time that the management server is started. Otherwise—that is, if you wish to disable configuration caching for a management server that has already created a configuration cache—you must stop the management server, delete any existing configuration cache files manually, then restart the management server with --skip-config-cache (or with --config-cache set equal to 0, OFF, or FALSE).

    Configuration cache files are normally created in a directory named mysql-cluster under the installation directory (unless this location has been overridden using the --configdir option). Each time the management server updates its configuration data, it writes a new cache file. The files are named sequentially in order of creation using the following format:

    ndb_node-id_config.bin.seq-number
    

    node-id is the management server's node ID; seq-number is a sequence number, beginning with 1. For example, if the management server's node ID is 5, then the first three configuration cache files would, when they are created, be named ndb_5_config.bin.1, ndb_5_config.bin.2, and ndb_5_config.bin.3.

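    The sequence just described can be sketched as follows (printing names only; no real cache files are read or written):

```shell
# Print the first three configuration cache file names for a management
# server whose node ID is 5 (illustration only).
node_id=5
for seq in 1 2 3; do
  echo "ndb_${node_id}_config.bin.${seq}"
done
```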
    If your intent is to purge or reload the configuration cache without actually disabling caching, you should start ndb_mgmd with one of the options --reload or --initial instead of --skip-config-cache.

    To re-enable the configuration cache, simply restart the management server, but without the --config-cache or --skip-config-cache option that was used previously to disable the configuration cache.

    ndb_mgmd does not check for the configuration directory (--configdir) or attempt to create one when --skip-config-cache is used. (Bug #13428853)

  • --config-file=filename, -f filename

    Property Value
    Command-Line Format --config-file=file
    Type File name
    Default Value [none]

    Instructs the management server as to which file it should use for its configuration file. By default, the management server looks for a file named config.ini in the same directory as the ndb_mgmd executable; otherwise the file name and location must be specified explicitly.

    This option has no default value, and is ignored unless the management server is forced to read the configuration file, either because ndb_mgmd was started with the --reload or --initial option, or because the management server could not find any configuration cache. This option is also read if ndb_mgmd was started with --config-cache=OFF. See Section 21.3.3, “NDB Cluster Configuration Files”, for more information.

  • --configdir=dir_name

    Property Value
    Command-Line Format

    --configdir=directory

    --config-dir=directory

    Type File name
    Default Value $INSTALLDIR/mysql-cluster

    Specifies the cluster management server's configuration cache directory. --config-dir is an alias for this option.

  • --daemon, -d

    Property Value
    Command-Line Format --daemon
    Type Boolean
    Default Value TRUE

    Instructs ndb_mgmd to start as a daemon process. This is the default behavior.

    This option has no effect when running ndb_mgmd on Windows platforms.

  • --initial

    Property Value
    Command-Line Format --initial
    Type Boolean
    Default Value FALSE

    Configuration data is cached internally, rather than being read from the cluster global configuration file each time the management server is started (see Section 21.3.3, “NDB Cluster Configuration Files”). Using the --initial option overrides this behavior, by forcing the management server to delete any existing cache files, and then to re-read the configuration data from the cluster configuration file and to build a new cache.

    This differs in two ways from the --reload option. First, --reload forces the server to check the configuration file against the cache and reload its data only if the contents of the file are different from the cache. Second, --reload does not delete any existing cache files.

    If ndb_mgmd is invoked with --initial but cannot find a global configuration file, the management server cannot start.

    When a management server starts, it checks for another management server in the same NDB Cluster and tries to use the other management server's configuration data. This behavior has implications when performing a rolling restart of an NDB Cluster with multiple management nodes. See Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”, for more information.

    When used together with the --config-file option, the cache is cleared only if the configuration file is actually found.

  • --install[=name]

    Property Value
    Command-Line Format --install[=name]
    Platform Specific Windows
    Type String
    Default Value ndb_mgmd

    Causes ndb_mgmd to be installed as a Windows service. Optionally, you can specify a name for the service; if not set, the service name defaults to ndb_mgmd. Although it is preferable to specify other ndb_mgmd program options in a my.ini or my.cnf configuration file, it is possible to use them together with --install. However, in such cases, the --install option must be specified first, before any other options are given, for the Windows service installation to succeed.

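    For example, on Windows the management server might be installed as a service from an elevated command prompt as follows (the service name and configuration-file path shown here are illustrative only):

```shell
REM Hypothetical example: --install must precede all other options
C:\> ndb_mgmd.exe --install=ndb_mgmd --config-file=C:\mysql\cluster\config.ini
```
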
    It is generally not advisable to use this option together with the --initial option, since this causes the configuration cache to be wiped and rebuilt every time the service is stopped and started. Care should also be taken if you intend to use any other ndb_mgmd options that affect the starting of the management server, and you should make absolutely certain you fully understand and allow for any possible consequences of doing so.

    The --install option has no effect on non-Windows platforms.

  • --interactive

    Property Value
    Command-Line Format --interactive
    Type Boolean
    Default Value FALSE

    Starts ndb_mgmd in interactive mode; that is, an ndb_mgm client session is started as soon as the management server is running. This option does not start any other NDB Cluster nodes.

  • --log-name=name

    Property Value
    Command-Line Format --log-name=name
    Type String
    Default Value MgmtSrvr

    Provides a name to be used for this node in the cluster log.

  • --mycnf

    Property Value
    Command-Line Format --mycnf
    Type Boolean
    Default Value FALSE

    Read configuration data from the my.cnf file.

  • --no-nodeid-checks

    Property Value
    Command-Line Format --no-nodeid-checks
    Type Boolean
    Default Value FALSE

    Do not perform any checks of node IDs.

  • --nodaemon

    Property Value
    Command-Line Format --nodaemon
    Type Boolean
    Default Value FALSE

    Instructs ndb_mgmd not to start as a daemon process.

    The default behavior for ndb_mgmd on Windows is to run in the foreground, making this option unnecessary on Windows platforms.

  • --nowait-nodes

    Property Value
    Command-Line Format --nowait-nodes=list
    Type Numeric
    Default Value
    Minimum Value 1
    Maximum Value 255

    When starting an NDB Cluster configured with two management nodes, each management server normally checks to see whether the other ndb_mgmd is also operational and whether the other management server's configuration is identical to its own. However, it is sometimes desirable to start the cluster with only one management node (and perhaps to allow the other ndb_mgmd to be started later). This option causes the management node to bypass any checks for any other management nodes whose node IDs are passed to this option, permitting the cluster to start as though configured to use only the management node that was started.

    For purposes of illustration, consider the following portion of a config.ini file (where we have omitted most of the configuration parameters that are not relevant to this example):

    [ndbd]
    NodeId = 1
    HostName = 198.51.100.101
    
    [ndbd]
    NodeId = 2
    HostName = 198.51.100.102
    
    [ndbd]
    NodeId = 3
    HostName = 198.51.100.103
    
    [ndbd]
    NodeId = 4
    HostName = 198.51.100.104
    
    [ndb_mgmd]
    NodeId = 10
    HostName = 198.51.100.150
    
    [ndb_mgmd]
    NodeId = 11
    HostName = 198.51.100.151
    
    [api]
    NodeId = 20
    HostName = 198.51.100.200
    
    [api]
    NodeId = 21
    HostName = 198.51.100.201
    

    Assume that you wish to start this cluster using only the management server having node ID 10 and running on the host having the IP address 198.51.100.150. (Suppose, for example, that the host computer on which you intend to run the other management server is temporarily unavailable due to a hardware failure, and you are waiting for it to be repaired.) To start the cluster in this way, use a command line on the machine at 198.51.100.150 to enter the following command:

    shell> ndb_mgmd --ndb-nodeid=10 --nowait-nodes=11
    

    As shown in the preceding example, when using --nowait-nodes, you must also use the --ndb-nodeid option to specify the node ID of this ndb_mgmd process.

    You can then start each of the cluster's data nodes in the usual way. If you wish to start and use the second management server in addition to the first management server at a later time without restarting the data nodes, you must start each data node with a connection string that references both management servers, like this:

    shell> ndbd -c 198.51.100.150,198.51.100.151
    

    The same is true with regard to the connection string used with any mysqld processes that you wish to start as NDB Cluster SQL nodes connected to this cluster. See Section 21.3.3.3, “NDB Cluster Connection Strings”, for more information.

    When used with ndb_mgmd, this option affects the behavior of the management node with regard to other management nodes only. Do not confuse it with the --nowait-nodes option used with ndbd or ndbmtd to permit a cluster to start with fewer than its full complement of data nodes; when used with data nodes, this option affects their behavior only with regard to other data nodes.

    Multiple management node IDs may be passed to this option as a comma-separated list. Each node ID must be no less than 1 and no greater than 255. In practice, it is quite rare to use more than two management servers for the same NDB Cluster (or to have any need for doing so); in most cases you need to pass to this option only the single node ID for the one management server that you do not wish to use when starting the cluster.

    Note

    When you later start the missing management server, its configuration must match that of the management server that is already in use by the cluster. Otherwise, it fails the configuration check performed by the existing management server, and does not start.

  • --print-full-config, -P

    Property Value
    Command-Line Format --print-full-config
    Type Boolean
    Default Value FALSE

    Shows extended information regarding the configuration of the cluster. With this option on the command line the ndb_mgmd process prints information about the cluster setup including an extensive list of the cluster configuration sections as well as parameters and their values. Normally used together with the --config-file (-f) option.

  • --reload

    Property Value
    Command-Line Format --reload
    Type Boolean
    Default Value FALSE

    NDB Cluster configuration data is stored internally rather than being read from the cluster global configuration file each time the management server is started (see Section 21.3.3, “NDB Cluster Configuration Files”). Using this option forces the management server to check its internal data store against the cluster configuration file and to reload the configuration if it finds that the configuration file does not match the cache. Existing configuration cache files are preserved, but not used.

    This differs in two ways from the --initial option. First, --initial causes all cache files to be deleted. Second, --initial forces the management server to re-read the global configuration file and construct a new cache.

    If the management server cannot find a global configuration file, then the --reload option is ignored.

    When --reload is used, the management server must be able to communicate with data nodes and any other management servers in the cluster before it attempts to read the global configuration file; otherwise, the management server fails to start. This can happen due to changes in the networking environment, such as new IP addresses for nodes or an altered firewall configuration. In such cases, you must use --initial instead to force the existing cached configuration to be discarded and reloaded from the file. See Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”, for additional information.

  • --remove[=name]

    Property Value
    Command-Line Format --remove[=name]
    Platform Specific Windows
    Type String
    Default Value ndb_mgmd

    Remove a management server process that has been installed as a Windows service, optionally specifying the name of the service to be removed. Applies only to Windows platforms.

  • --verbose, -v

    Property Value
    Command-Line Format --verbose
    Type Boolean
    Default Value FALSE

    Causes ndb_mgmd to provide more verbose (debugging) output in its log.

It is not strictly necessary to specify a connection string when starting the management server. However, if you are using more than one management server, a connection string should be provided and each node in the cluster should specify its node ID explicitly.

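For example, with two management servers configured as in the earlier config.ini sample, each management server can be started with an explicit node ID and a connection string naming both management hosts (the path and addresses shown here follow that example and are illustrative only):

```shell
# Hypothetical invocation on the host at 198.51.100.150
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=10 \
         --ndb-connectstring=198.51.100.150,198.51.100.151
```
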
See Section 21.3.3.3, “NDB Cluster Connection Strings”, for information about using connection strings. Section 21.4.4, “ndb_mgmd — The NDB Cluster Management Server Daemon”, describes other options for ndb_mgmd.

The following files are created or used by ndb_mgmd in its starting directory, and are placed in the DataDir as specified in the config.ini configuration file. In the list that follows, node_id is the unique node identifier.

  • config.ini is the configuration file for the cluster as a whole. This file is created by the user and read by the management server. Section 21.3, “Configuration of NDB Cluster”, discusses how to set up this file.

  • ndb_node_id_cluster.log is the cluster events log file. Examples of such events include checkpoint startup and completion, node startup events, node failures, and levels of memory usage. A complete listing of cluster events with descriptions may be found in Section 21.5, “Management of NDB Cluster”.

    By default, when the size of the cluster log reaches one million bytes, the file is renamed to ndb_node_id_cluster.log.seq_id, where seq_id is the sequence number of the cluster log file. (For example: If files with the sequence numbers 1, 2, and 3 already exist, the next log file is named using the number 4.) You can change the size and number of files, and other characteristics of the cluster log, using the LogDestination configuration parameter.

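    Assuming a management node with node ID 10 and default settings, rotation might therefore leave files such as the following in the DataDir (an illustrative listing, not actual output):

```shell
ndb_10_cluster.log      # current cluster log
ndb_10_cluster.log.1    # first rotated log file (sequence number 1)
ndb_10_cluster.log.2    # second rotated log file
```
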
  • ndb_node_id_out.log is the file used for stdout and stderr when running the management server as a daemon.

  • ndb_node_id.pid is the process ID file used when running the management server as a daemon.

21.4.5 ndb_mgm — The NDB Cluster Management Client

The ndb_mgm management client process is actually not needed to run the cluster. Its value lies in providing a set of commands for checking the cluster's status, starting backups, and performing other administrative functions. The management client accesses the management server using a C API. Advanced users can also employ this API for programming dedicated management processes to perform tasks similar to those performed by ndb_mgm.

To start the management client, it is necessary to supply the host name and port number of the management server:

shell> ndb_mgm [host_name [port_num]]

For example:

shell> ndb_mgm ndb_mgmd.mysql.com 1186

The default host name and port number are localhost and 1186, respectively.

The following table includes options that are specific to the NDB Cluster management client program ndb_mgm. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_mgm), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.319 Command-line options for the ndb_mgm program

Format Description Added, Deprecated, or Removed

--try-reconnect=#,

-t

Set the number of times to retry a connection before giving up; synonym for --connect-retries

All MySQL 5.7 based releases

--execute=name,

-e

Execute command and exit

All MySQL 5.7 based releases


  • --connect-retries=#

    Property Value
    Command-Line Format --connect-retries=#
    Type Numeric
    Default Value 3
    Minimum Value 0
    Maximum Value 4294967295

    This option specifies the number of times following the first attempt to retry a connection before giving up (the client always tries the connection at least once). The length of time to wait per attempt is set using --connect-retry-delay.

    This option is synonymous with the --try-reconnect option, which is now deprecated.

    The default for this option differs from its default when used with other NDB programs. See Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”, for more information.

  • --execute=command, -e command

    Property Value
    Command-Line Format --execute=name

    This option can be used to send a command to the NDB Cluster management client from the system shell. For example, either of the following is equivalent to executing SHOW in the management client:

    shell> ndb_mgm -e "SHOW"
    
    shell> ndb_mgm --execute="SHOW"
    

    This is analogous to how the --execute or -e option works with the mysql command-line client. See Section 4.2.2.1, “Using Options on the Command Line”.

    Note

    If the management client command to be passed using this option contains any space characters, then the command must be enclosed in quotation marks. Either single or double quotation marks may be used. If the management client command contains no space characters, the quotation marks are optional.

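    For example (both commands assume a management server reachable at the default host and port):

```shell
# Quotation marks are required: the command contains a space
ndb_mgm -e "ALL STATUS"

# Quotation marks are optional: the command is a single word
ndb_mgm -e SHOW
```
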
  • --try-reconnect=number

    Property Value
    Command-Line Format --try-reconnect=#
    Deprecated Yes
    Type (>= 5.7.10-ndb-7.5.0) Numeric
    Type Integer
    Default Value (>= 5.7.10-ndb-7.5.0) 12
    Default Value 3
    Minimum Value 0
    Maximum Value 4294967295

    If the connection to the management server is broken, the node tries to reconnect to it every 5 seconds until it succeeds. By using this option, it is possible to limit the number of attempts to number before giving up and reporting an error instead.

    This option is deprecated and subject to removal in a future release. Use --connect-retries, instead.

Additional information about using ndb_mgm can be found in Section 21.5.2, “Commands in the NDB Cluster Management Client”.

21.4.6 ndb_blob_tool — Check and Repair BLOB and TEXT columns of NDB Cluster Tables

This tool can be used to check for and remove orphaned BLOB column parts from NDB tables, as well as to generate a file listing any orphaned parts. It is sometimes useful in diagnosing and repairing corrupted or damaged NDB tables containing BLOB or TEXT columns.

The basic syntax for ndb_blob_tool is shown here:

ndb_blob_tool [options] table [column, ...]

Unless you use the --help option, you must specify an action to be performed by including one or more of the options --check-orphans, --delete-orphans, or --dump-file. These options cause ndb_blob_tool to check for orphaned BLOB parts, remove any orphaned BLOB parts, and generate a dump file listing orphaned BLOB parts, respectively, and are described in more detail later in this section.

You must also specify the name of a table when invoking ndb_blob_tool. In addition, you can optionally follow the table name with the (comma-separated) names of one or more BLOB or TEXT columns from that table. If no columns are listed, the tool works on all of the table's BLOB and TEXT columns. If you need to specify a database, use the --database (-d) option.

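For instance, to restrict the check to a single BLOB column, you can name it after the table (the database, table, and column names here anticipate the example given at the end of this section):

```shell
# Check only column c2 of table btest in database test;
# omitting the column name would check all BLOB and TEXT columns
ndb_blob_tool --check-orphans -d test btest c2
```
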
The --verbose option provides additional information in the output about the tool's progress.

The following table includes options that are specific to ndb_blob_tool. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_blob_tool), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.320 Command-line options for the ndb_blob_tool program

Format Description Added, Deprecated, or Removed

--check-orphans

Check for orphan blob parts

All MySQL 5.7 based releases

--database=db_name,

-d

Database to find the table in.

All MySQL 5.7 based releases

--delete-orphans

Delete orphan blob parts

All MySQL 5.7 based releases

--dump-file=file

Write orphan keys to specified file

All MySQL 5.7 based releases

--verbose,

-v

Verbose output

All MySQL 5.7 based releases


  • --check-orphans

    Property Value
    Command-Line Format --check-orphans
    Type Boolean
    Default Value FALSE

    Check for orphaned BLOB parts in NDB Cluster tables.

  • --database=db_name, -d

    Property Value
    Command-Line Format --database=db_name
    Type String
    Default Value [none]

    Specify the database to find the table in.

  • --delete-orphans

    Property Value
    Command-Line Format --delete-orphans
    Type Boolean
    Default Value FALSE

    Remove orphaned BLOB parts from NDB Cluster tables.

  • --dump-file=file

    Property Value
    Command-Line Format --dump-file=file
    Type File name
    Default Value [none]

    Writes a list of orphaned BLOB column parts to file. The information written to the file includes the table key and BLOB part number for each orphaned BLOB part.

  • --verbose

    Property Value
    Command-Line Format --verbose
    Type Boolean
    Default Value FALSE

    Provide extra information in the tool's output regarding its progress.

Example

First we create an NDB table in the test database, using the CREATE TABLE statement shown here:

USE test;

CREATE TABLE btest (
    c0 BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    c1 TEXT,
    c2 BLOB
)   ENGINE=NDB;

Then we insert a few rows into this table, using a series of statements similar to this one:

INSERT INTO btest VALUES (NULL, 'x', REPEAT('x', 1000));

When run with --check-orphans against this table, ndb_blob_tool generates the following output:

shell> ndb_blob_tool --check-orphans --verbose -d test btest
connected
processing 2 blobs
processing blob #0 c1 NDB$BLOB_19_1
NDB$BLOB_19_1: nextResult: res=1
total parts: 0
orphan parts: 0
processing blob #1 c2 NDB$BLOB_19_2
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=0
NDB$BLOB_19_2: nextResult: res=1
total parts: 10
orphan parts: 0
disconnected

NDBT_ProgramExit: 0 - OK

The tool reports that there are no NDB BLOB column parts associated with column c1, even though c1 is a TEXT column. This is due to the fact that, in an NDB table, only the first 256 bytes of a BLOB or TEXT column value are stored inline, and only the excess, if any, is stored separately; thus, if there are no values using more than 256 bytes in a given column of one of these types, no BLOB column parts are created by NDB for this column. See Section 11.8, “Data Type Storage Requirements”, for more information.

21.4.7 ndb_config — Extract NDB Cluster Configuration Information

This tool extracts current configuration information for data nodes, SQL nodes, and API nodes from one of a number of sources: an NDB Cluster management node, or its config.ini or my.cnf file. By default, the management node is the source for the configuration data; to override the default, execute ndb_config with the --config-file or --mycnf option. It is also possible to use a data node as the source by specifying its node ID with --config_from_node=node_id.

ndb_config can also provide an offline dump of all configuration parameters which can be used, along with their default, maximum, and minimum values and other information. The dump can be produced in either text or XML format; for more information, see the discussion of the --configinfo and --xml options later in this section).

You can filter the results by section (DB, SYSTEM, or CONNECTIONS) using one of the options --nodes, --system, or --connections.

The following table includes options that are specific to ndb_config. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_config), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.321 Command-line options for the ndb_config program

Format Description Added, Deprecated, or Removed

--config-file=file_name

Set the path to config.ini file

All MySQL 5.7 based releases

--config_from_node=#

Obtain configuration data from the node having this ID (must be a data node).

All MySQL 5.7 based releases

--configinfo

Dumps information about all NDB configuration parameters in text format with default, maximum, and minimum values. Use with --xml to obtain XML output.

All MySQL 5.7 based releases

--connections

Print connections information ([tcp], [tcp default], [shm], or [shm default] sections of cluster configuration file) only. Cannot be used with --system or --nodes.

All MySQL 5.7 based releases

--diff-default

Print only configuration parameters that have non-default values

ADDED: NDB 7.5.7, NDB 7.6.3

--fields=string,

-f

Field separator

All MySQL 5.7 based releases

--host=name

Specify host

All MySQL 5.7 based releases

--mycnf

Read configuration data from my.cnf file

All MySQL 5.7 based releases

--nodeid,

--id

Get configuration of node with this ID

All MySQL 5.7 based releases

--nodes

Print node information ([ndbd] or [ndbd default] section of cluster configuration file) only. Cannot be used with --system or --connections.

All MySQL 5.7 based releases

-c

Short form for --ndb-connectstring

All MySQL 5.7 based releases

--query=string,

-q

One or more query options (attributes)

All MySQL 5.7 based releases

--query-all,

-a

Dumps all parameters and values to a single comma-delimited string.

ADDED: NDB 7.4.16, NDB 7.5.7

--rows=string,

-r

Row separator

All MySQL 5.7 based releases

--system

Print SYSTEM section information only (see ndb_config --configinfo output). Cannot be used with --nodes or --connections.

All MySQL 5.7 based releases

--type=name

Specify node type

All MySQL 5.7 based releases

--configinfo --xml

Use --xml with --configinfo to obtain a dump of all NDB configuration parameters in XML format with default, maximum, and minimum values.

All MySQL 5.7 based releases


  • --configinfo

    The --configinfo option causes ndb_config to dump a list of each NDB Cluster configuration parameter supported by the NDB Cluster distribution of which ndb_config is a part, including the following information:

    • A brief description of each parameter's purpose, effects, and usage

    • The section of the config.ini file where the parameter may be used

    • The parameter's data type or unit of measurement

    • Where applicable, the parameter's default, minimum, and maximum values

    • NDB Cluster release version and build information

    By default, this output is in text format. Part of this output is shown here:

    shell> ndb_config --configinfo
    
    ****** SYSTEM ******
    
    Name (String)
    Name of system (NDB Cluster)
    MANDATORY
    
    PrimaryMGMNode (Non-negative Integer)
    Node id of Primary ndb_mgmd(MGM) node
    Default: 0 (Min: 0, Max: 4294967039)
    
    ConfigGenerationNumber (Non-negative Integer)
    Configuration generation number
    Default: 0 (Min: 0, Max: 4294967039)
    
    ****** DB ******
    
    MaxNoOfSubscriptions (Non-negative Integer)
    Max no of subscriptions (default 0 == MaxNoOfTables)
    Default: 0 (Min: 0, Max: 4294967039)
    
    MaxNoOfSubscribers (Non-negative Integer)
    Max no of subscribers (default 0 == 2 * MaxNoOfTables)
    Default: 0 (Min: 0, Max: 4294967039)
    
    …
    

    Use this option together with the --xml option to obtain output in XML format.

  • --config-file=path-to-file

    Property Value
    Command-Line Format --config-file=file_name
    Type File name
    Default Value

    Gives the path to the management server's configuration file (config.ini). This may be a relative or absolute path. If the management node resides on a different host from the one on which ndb_config is invoked, then an absolute path must be used.

  • --config_from_node=#

    Property Value
    Command-Line Format --config-from-node=#
    Type Numeric
    Default Value none
    Minimum Value 1
    Maximum Value 48

    Obtain the cluster's configuration data from the data node that has this ID.

    If the node having this ID is not a data node, ndb_config fails with an error. (To obtain configuration data from the management node instead, simply omit this option.)

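    For example, the following (hypothetical) invocation asks the data node having node ID 2, rather than the management server, for configuration values:

```shell
# Query node ID and DataMemory for all data nodes, using data node 2
# as the source of the configuration data
ndb_config --config_from_node=2 --type=ndbd --query=nodeid,datamemory
```
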
  • --connections

    Property Value
    Command-Line Format --connections
    Type Boolean
    Default Value FALSE

    Tells ndb_config to print CONNECTIONS information only—that is, information about parameters found in the [tcp], [tcp default], [shm], or [shm default] sections of the cluster configuration file (see Section 21.3.3.10, “NDB Cluster TCP/IP Connections”, and Section 21.3.3.12, “NDB Cluster Shared Memory Connections”, for more information).

    This option is mutually exclusive with --nodes and --system; only one of these 3 options can be used.

  • --diff-default

    Property Value
    Command-Line Format --diff-default
    Introduced 5.7.18-ndb-7.6.3
    Type Boolean
    Default Value FALSE

    Print only configuration parameters that have non-default values.

  • --fields=delimiter, -f delimiter

    Property Value
    Command-Line Format --fields=string
    Type String
    Default Value

    Specifies a delimiter string used to separate the fields in the result. The default is , (the comma character).

    Note

    If the delimiter contains spaces or escapes (such as \n for the linefeed character), then it must be quoted.

  • --host=hostname

    Property Value
    Command-Line Format --host=name
    Type String
    Default Value

    Specifies the host name of the node for which configuration information is to be obtained.

    Note

    While the hostname localhost usually resolves to the IP address 127.0.0.1, this may not necessarily be true for all operating platforms and configurations. This means that it is possible, when localhost is used in config.ini, for ndb_config --host=localhost to fail if ndb_config is run on a different host where localhost resolves to a different address (for example, on some versions of SUSE Linux, this is 127.0.0.2). In general, for best results, you should use numeric IP addresses for all NDB Cluster configuration values relating to hosts, or verify that all NDB Cluster hosts handle localhost in the same fashion.

  • --mycnf

    Property Value
    Command-Line Format --mycnf
    Type Boolean
    Default Value FALSE

    Read configuration data from the my.cnf file.

  • --ndb-connectstring=connection_string, -c connection_string

    Property Value
Command-Line Format

    --ndb-connectstring=connectstring

    --connect-string=connectstring

    Type String
    Default Value localhost:1186

    Specifies the connection string to use in connecting to the management server. The format for the connection string is the same as described in Section 21.3.3.3, “NDB Cluster Connection Strings”, and defaults to localhost:1186.
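
As a minimal illustration of this format, the following Python sketch (our own helper, not part of any NDB distribution) splits the simple host:port form of a connection string, falling back to the documented defaults; the full connection-string grammar (nodeid specifications, multiple hosts) is deliberately not handled:

```python
# Simplified parser for the "host[:port]" form of an NDB connection string.
DEFAULT_HOST = "localhost"
DEFAULT_PORT = 1186  # default NDB management server port

def parse_connectstring(s):
    """Return (host, port) from a "host[:port]" string."""
    if not s:
        return (DEFAULT_HOST, DEFAULT_PORT)
    host, _, port = s.partition(":")
    return (host or DEFAULT_HOST, int(port) if port else DEFAULT_PORT)

print(parse_connectstring("localhost:1186"))   # → ('localhost', 1186)
print(parse_connectstring("198.51.100.179"))   # → ('198.51.100.179', 1186)
```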

  • --nodeid=node_id

    Property Value
    Command-Line Format --ndb-nodeid=#
    Type Numeric
    Default Value 0

    Specify the node ID of the node for which configuration information is to be obtained. Formerly, --id could be used as a synonym for this option; in NDB 7.5 and later, the only form accepted is --nodeid.

  • --nodes

    Property Value
    Command-Line Format --nodes
    Type Boolean
    Default Value FALSE

    Tells ndb_config to print information relating only to parameters defined in an [ndbd] or [ndbd default] section of the cluster configuration file (see Section 21.3.3.6, “Defining NDB Cluster Data Nodes”).

    This option is mutually exclusive with --connections and --system; only one of these 3 options can be used.

  • --query=query-options, -q query-options

    Property Value
    Command-Line Format --query=string
    Type String
    Default Value

    This is a comma-delimited list of query options—that is, a list of one or more node attributes to be returned. These include nodeid (node ID), type (node type—that is, ndbd, mysqld, or ndb_mgmd), and any configuration parameters whose values are to be obtained.

    For example, --query=nodeid,type,datamemory,datadir returns the node ID, node type, DataMemory, and DataDir for each node.

    Formerly, id was accepted as a synonym for nodeid, but has been removed in NDB 7.5 and later.

    Note

    If a given parameter is not applicable to a certain type of node, then an empty string is returned for the corresponding value. See the examples later in this section for more information.

  • --query-all, -a

    Property Value
    Command-Line Format --query-all
    Introduced 5.7.18-ndb-7.5.7
    Type String
    Default Value

    Returns a comma-delimited list of all query options (node attributes); note that this list is a single string.

    This option was introduced in NDB 7.5.7 (Bug #60095, Bug #11766869).

  • --rows=separator, -r separator

    Property Value
    Command-Line Format --rows=string
    Type String
    Default Value

    Specifies a separator string used to separate the rows in the result. The default is a space character.

    Note

    If the separator contains spaces or escapes (such as \n for the linefeed character), then it must be quoted.

  • --system

    Property Value
    Command-Line Format --system
    Type Boolean
    Default Value FALSE

    Tells ndb_config to print SYSTEM information only. This consists of system variables that cannot be changed at run time; thus, there is no corresponding section of the cluster configuration file for them. They can be seen (prefixed with ****** SYSTEM ******) in the output of ndb_config --configinfo.

    This option is mutually exclusive with --nodes and --connections; only one of these 3 options can be used.

  • --type=node_type

    Property Value
    Command-Line Format --type=name
    Type Enumeration
    Default Value [none]
Valid Values

    ndbd

    mysqld

    ndb_mgmd

    Filters results so that only configuration values applying to nodes of the specified node_type (ndbd, mysqld, or ndb_mgmd) are returned.

  • --usage, --help, or -?

    Property Value
    Command-Line Format

    --help

    --usage

    Causes ndb_config to print a list of available options, and then exit.

  • --version, -V

    Property Value
    Command-Line Format --version

    Causes ndb_config to print a version information string, and then exit.

  • --configinfo --xml

    Property Value
    Command-Line Format --configinfo --xml
    Type Boolean
    Default Value false

    Cause ndb_config --configinfo to provide output as XML by adding this option. A portion of such output is shown in this example:

    shell> ndb_config --configinfo --xml
    
    <configvariables protocolversion="1" ndbversionstring="5.7.28-ndb-7.5.16"
                        ndbversion="460032" ndbversionmajor="7" ndbversionminor="5"
                        ndbversionbuild="0">
      <section name="SYSTEM">
        <param name="Name" comment="Name of system (NDB Cluster)" type="string"
                  mandatory="true"/>
        <param name="PrimaryMGMNode" comment="Node id of Primary ndb_mgmd(MGM) node"
                  type="unsigned" default="0" min="0" max="4294967039"/>
        <param name="ConfigGenerationNumber" comment="Configuration generation number"
                  type="unsigned" default="0" min="0" max="4294967039"/>
      </section>
      <section name="MYSQLD" primarykeys="NodeId">
        <param name="wan" comment="Use WAN TCP setting as default" type="bool"
                  default="false"/>
        <param name="HostName" comment="Name of computer for this node"
                  type="string" default=""/>
        <param name="Id" comment="NodeId" type="unsigned" mandatory="true"
                  min="1" max="255" deprecated="true"/>
        <param name="NodeId" comment="Number identifying application node (mysqld(API))"
                  type="unsigned" mandatory="true" min="1" max="255"/>
        <param name="ExecuteOnComputer" comment="HostName" type="string"
                  deprecated="true"/>
    
        …
    
      </section>
    
      …
    
    </configvariables>
    
    Note

    Normally, the XML output produced by ndb_config --configinfo --xml is formatted using one line per element; we have added extra whitespace in the previous example, as well as the next one, for reasons of legibility. This should not make any difference to applications using this output, since most XML processors either ignore nonessential whitespace as a matter of course, or can be instructed to do so.

    The XML output also indicates when changing a given parameter requires that data nodes be restarted using the --initial option. This is shown by the presence of an initial="true" attribute in the corresponding <param> element. In addition, the restart type (system or node) is also shown; if a given parameter requires a system restart, this is indicated by the presence of a restart="system" attribute in the corresponding <param> element. For example, changing the value set for the Diskless parameter requires a system initial restart, as shown here (with the restart and initial attributes highlighted for visibility):

    <param name="Diskless" comment="Run wo/ disk" type="bool" default="false"
              restart="system" initial="true"/>
    

    Currently, no initial attribute is included in the XML output for <param> elements corresponding to parameters which do not require initial restarts; in other words, initial="false" is the default, and the value false should be assumed if the attribute is not present. Similarly, the default restart type is node (that is, an online or rolling restart of the cluster), but the restart attribute is included only if the restart type is system (meaning that all cluster nodes must be shut down at the same time, then restarted).

    Deprecated parameters are indicated in the XML output by the deprecated attribute, as shown here:

    <param name="NoOfDiskPagesToDiskAfterRestartACC" comment="DiskCheckpointSpeed"
           type="unsigned" default="20" min="1" max="4294967039" deprecated="true"/>
    

    In such cases, the comment refers to one or more parameters that supersede the deprecated parameter. Similarly to initial, the deprecated attribute is indicated only when the parameter is deprecated, with deprecated="true", and does not appear at all for parameters which are not deprecated. (Bug #21127135)

    Beginning with NDB 7.5.0, parameters that are required are indicated with mandatory="true", as shown here:

    <param name="NodeId"
              comment="Number identifying application node (mysqld(API))"
              type="unsigned" mandatory="true" min="1" max="255"/>
    

    In much the same way that the initial or deprecated attribute is displayed only for a parameter that requires an initial restart or that is deprecated, the mandatory attribute is included only if the given parameter is actually required.
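
The defaulting rules described above can be applied mechanically. This Python sketch (an illustration of ours, using only the standard library) reads the attributes from the Diskless <param> element shown earlier, supplying the documented defaults when an attribute is absent:

```python
import xml.etree.ElementTree as ET

# <param> element copied verbatim from the sample --configinfo --xml output.
src = ('<param name="Diskless" comment="Run wo/ disk" type="bool" '
       'default="false" restart="system" initial="true"/>')
param = ET.fromstring(src)

# Attributes absent from the element take the documented defaults.
initial = param.get("initial", "false") == "true"       # default: false
restart = param.get("restart", "node")                  # default: node
deprecated = param.get("deprecated", "false") == "true" # default: false

print(param.get("name"), restart, initial, deprecated)
# → Diskless system True False
```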

    Important

    The --xml option can be used only with the --configinfo option. Using --xml without --configinfo fails with an error.

    Unlike the options used with this program to obtain current configuration data, --configinfo and --xml use information obtained from the NDB Cluster sources when ndb_config was compiled. For this reason, no connection to a running NDB Cluster or access to a config.ini or my.cnf file is required for these two options.

Combining other ndb_config options (such as --query or --type) with --configinfo (with or without the --xml option) is not supported. Currently, if you attempt to do so, the usual result is that all other options besides --configinfo or --xml are simply ignored. However, this behavior is not guaranteed and is subject to change at any time. In addition, since ndb_config, when used with the --configinfo option, does not access the NDB Cluster or read any files, trying to specify additional options such as --ndb-connectstring or --config-file with --configinfo serves no purpose.

Examples

  1. To obtain the node ID and type of each node in the cluster:

    shell> ./ndb_config --query=nodeid,type --fields=':' --rows='\n'
    1:ndbd
    2:ndbd
    3:ndbd
    4:ndbd
    5:ndb_mgmd
    6:mysqld
    7:mysqld
    8:mysqld
    9:mysqld
    

    In this example, we used the --fields option to separate the ID and type of each node with a colon character (:), and the --rows option to place the values for each node on a new line in the output.
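
Output formatted this way is easy to consume programmatically. The following Python sketch (ours, not part of NDB Cluster) splits such output back into (nodeid, type) pairs using the same delimiters:

```python
# Parse ndb_config output produced with --fields=':' --rows='\n',
# such as the "1:ndbd" lines shown above.
def parse_nodes(output, field_sep=":", row_sep="\n"):
    rows = [r for r in output.split(row_sep) if r.strip()]
    return [tuple(r.split(field_sep)) for r in rows]

sample = "1:ndbd\n2:ndbd\n5:ndb_mgmd\n6:mysqld\n"
for nodeid, ntype in parse_nodes(sample):
    print(nodeid, ntype)
```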

  2. To produce a connection string that can be used by data, SQL, and API nodes to connect to the management server:

    shell> ./ndb_config --config-file=usr/local/mysql/cluster-data/config.ini \
    --query=hostname,portnumber --fields=: --rows=, --type=ndb_mgmd
    198.51.100.179:1186
    
  3. This invocation of ndb_config checks only data nodes (using the --type option), and shows the values for each node's ID and host name, as well as the values set for its DataMemory and DataDir parameters:

    shell> ./ndb_config --type=ndbd --query=nodeid,host,datamemory,datadir -f ' : ' -r '\n'
    1 : 198.51.100.193 : 83886080 : /usr/local/mysql/cluster-data
    2 : 198.51.100.112 : 83886080 : /usr/local/mysql/cluster-data
    3 : 198.51.100.176 : 83886080 : /usr/local/mysql/cluster-data
    4 : 198.51.100.119 : 83886080 : /usr/local/mysql/cluster-data
    

    In this example, we used the short options -f and -r for setting the field delimiter and row separator, respectively, as well as the short option -q to pass a list of parameters to be obtained.

  4. To exclude results from any host except one in particular, use the --host option:

    shell> ./ndb_config --host=198.51.100.176 -f : -r '\n' -q nodeid,type
    3:ndbd
    5:ndb_mgmd
    

    In this example, we also used the short form -q to determine the attributes to be queried.

    Similarly, you can limit results to a node with a specific ID using the --nodeid option.

21.4.8 ndb_cpcd — Automate Testing for NDB Development

A utility having this name was formerly part of an internal automated test framework used in testing and debugging NDB Cluster. It is no longer included in NDB Cluster distributions provided by Oracle.

21.4.9 ndb_delete_all — Delete All Rows from an NDB Table

ndb_delete_all deletes all rows from the given NDB table. In some cases, this can be much faster than DELETE or even TRUNCATE TABLE.

Usage

ndb_delete_all -c connection_string tbl_name -d db_name

This deletes all rows from the table named tbl_name in the database named db_name. It is exactly equivalent to executing TRUNCATE db_name.tbl_name in MySQL.

The following table includes options that are specific to ndb_delete_all. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_delete_all), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.322 Command-line options for the ndb_delete_all program

Format                   Description                                                              Added, Deprecated, or Removed
--database=dbname, -d    Name of the database in which the table is found                         All MySQL 5.7 based releases
--transactional, -t      Perform the delete in a single transaction (may run out of operations)   All MySQL 5.7 based releases
--tupscan                Run tup scan                                                             All MySQL 5.7 based releases
--diskscan               Run disk scan                                                            All MySQL 5.7 based releases


  • --transactional, -t

    Use of this option causes the delete operation to be performed as a single transaction.

    Warning

    With very large tables, using this option may cause the number of operations available to the cluster to be exceeded.

21.4.10 ndb_desc — Describe NDB Tables

ndb_desc provides a detailed description of one or more NDB tables.

Usage

ndb_desc -c connection_string tbl_name -d db_name [options]

ndb_desc -c connection_string index_name -d db_name -t tbl_name

Additional options that can be used with ndb_desc are listed later in this section.

Sample Output

MySQL table creation and population statements:

USE test;

CREATE TABLE fish (
    id INT(11) NOT NULL AUTO_INCREMENT,
    name VARCHAR(20) NOT NULL,
    length_mm INT(11) NOT NULL,
    weight_gm INT(11) NOT NULL,

    PRIMARY KEY pk (id),
    UNIQUE KEY uk (name)
) ENGINE=NDB;

INSERT INTO fish VALUES
    (NULL, 'guppy', 35, 2), (NULL, 'tuna', 2500, 150000),
    (NULL, 'shark', 3000, 110000), (NULL, 'manta ray', 1500, 50000),
    (NULL, 'grouper', 900, 125000), (NULL ,'puffer', 250, 2500);

Output from ndb_desc:

shell> ./ndb_desc -c localhost fish -d test -p
-- fish --
Version: 2
Fragment type: HashMapPartition
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 4
Number of primary keys: 1
Length of frm data: 337
Max Rows: 0
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
PartitionCount: 2
FragmentCount: 2
PartitionBalance: FOR_RP_BY_LDM
ExtraRowGciBits: 0
ExtraRowAuthorBits: 0
TableStatus: Retrieved
Table options:
HashMap: DEFAULT-HASHMAP-3840-2
-- Attributes --
id Int PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
name Varchar(20;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY DYNAMIC
length_mm Int NOT NULL AT=FIXED ST=MEMORY DYNAMIC
weight_gm Int NOT NULL AT=FIXED ST=MEMORY DYNAMIC
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
uk(name) - OrderedIndex
uk$unique(name) - UniqueHashIndex
-- Per partition info --
Partition       Row count       Commit count    Frag fixed memory       Frag varsized memory    Extent_space    Free extent_space
0               2               2               32768                   32768                   0               0
1               4               4               32768                   32768                   0               0


NDBT_ProgramExit: 0 - OK
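
The per-partition rows at the end of this output lend themselves to mechanical processing. As a sketch (ours), the following Python snippet parses the two partition lines shown above and totals the Row count column, which matches the six rows inserted into the fish table:

```python
# The two "Per partition info" data lines from the sample ndb_desc output:
# columns are Partition, Row count, Commit count, Frag fixed memory,
# Frag varsized memory, Extent_space, Free extent_space.
lines = [
    "0               2               2               32768                   32768                   0               0",
    "1               4               4               32768                   32768                   0               0",
]
rows = [list(map(int, ln.split())) for ln in lines]

total_rows = sum(r[1] for r in rows)  # second column is Row count
print(total_rows)  # → 6
```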

Information about multiple tables can be obtained in a single invocation of ndb_desc by using their names, separated by spaces. All of the tables must be in the same database.

You can obtain additional information about a specific index using the --table (short form: -t) option and supplying the name of the index as the first argument to ndb_desc, as shown here:

shell> ./ndb_desc uk -d test -t fish
-- uk --
Version: 2
Base table: fish
Number of attributes: 1
Logging: 0
Index type: OrderedIndex
Index status: Retrieved
-- Attributes --
name Varchar(20;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
-- IndexTable 10/uk --
Version: 2
Fragment type: FragUndefined
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: yes
Number of attributes: 2
Number of primary keys: 1
Length of frm data: 0
Max Rows: 0
Row Checksum: 1
Row GCI: 1
SingleUserMode: 2
ForceVarPart: 0
PartitionCount: 2
FragmentCount: 2
FragmentCountType: ONE_PER_LDM_PER_NODE
ExtraRowGciBits: 0
ExtraRowAuthorBits: 0
TableStatus: Retrieved
Table options:
-- Attributes --
name Varchar(20;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
NDB$TNODE Unsigned [64] PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY
-- Indexes --
PRIMARY KEY(NDB$TNODE) - UniqueHashIndex

NDBT_ProgramExit: 0 - OK

When an index is specified in this way, the --extra-partition-info and --extra-node-info options have no effect.

The Version column in the output contains the table's schema object version. For information about interpreting this value, see NDB Schema Object Versions.

Three of the table properties that can be set using NDB_TABLE comments embedded in CREATE TABLE and ALTER TABLE statements are also visible in ndb_desc output. The table's FRAGMENT_COUNT_TYPE is always shown in the FragmentCountType column. READ_BACKUP and FULLY_REPLICATED, if set to 1, are shown in the Table options column. You can see this after executing the following ALTER TABLE statement in the mysql client:

mysql> ALTER TABLE fish COMMENT='NDB_TABLE=READ_BACKUP=1,FULLY_REPLICATED=1';
1 row in set, 1 warning (0.00 sec)

mysql> SHOW WARNINGS\G
+---------+------+---------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                 |
+---------+------+---------------------------------------------------------------------------------------------------------+
| Warning | 1296 | Got error 4503 'Table property is FRAGMENT_COUNT_TYPE=ONE_PER_LDM_PER_NODE but not in comment' from NDB |
+---------+------+---------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

The warning is issued because READ_BACKUP=1 requires that the table's fragment count type is (or be set to) ONE_PER_LDM_PER_NODE_GROUP; NDB sets this automatically in such cases. You can check that the ALTER TABLE statement has the desired effect using SHOW CREATE TABLE:

mysql> SHOW CREATE TABLE fish\G
*************************** 1. row ***************************
       Table: fish
Create Table: CREATE TABLE `fish` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(20) NOT NULL,
  `length_mm` int(11) NOT NULL,
  `weight_gm` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk` (`name`)
) ENGINE=ndbcluster DEFAULT CHARSET=latin1
COMMENT='NDB_TABLE=READ_BACKUP=1,FULLY_REPLICATED=1'
1 row in set (0.01 sec)

Because FRAGMENT_COUNT_TYPE was not set explicitly, its value is not shown in the comment text printed by SHOW CREATE TABLE. ndb_desc, however, displays the updated value for this attribute. The Table options column shows the binary properties just enabled. You can see this in the output shown here (emphasized text):

shell> ./ndb_desc -c localhost fish -d test -p
-- fish --
Version: 4
Fragment type: HashMapPartition
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 4
Number of primary keys: 1
Length of frm data: 380
Max Rows: 0
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
PartitionCount: 1
FragmentCount: 1
FragmentCountType: ONE_PER_LDM_PER_NODE_GROUP
ExtraRowGciBits: 0
ExtraRowAuthorBits: 0
TableStatus: Retrieved
Table options: readbackup, fullyreplicated
HashMap: DEFAULT-HASHMAP-3840-1
-- Attributes --
id Int PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
name Varchar(20;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY DYNAMIC
length_mm Int NOT NULL AT=FIXED ST=MEMORY DYNAMIC
weight_gm Int NOT NULL AT=FIXED ST=MEMORY DYNAMIC
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
uk(name) - OrderedIndex
uk$unique(name) - UniqueHashIndex
-- Per partition info --
Partition       Row count       Commit count    Frag fixed memory       Frag varsized memory    Extent_space    Free extent_space

NDBT_ProgramExit: 0 - OK

For more information about these table properties, see Section 13.1.18.10, “Setting NDB_TABLE Options”.

The Extent_space and Free extent_space columns are applicable only to NDB tables having columns on disk; for tables having only in-memory columns, these columns always contain the value 0.

To illustrate their use, we modify the previous example. First, we must create the necessary Disk Data objects, as shown here:

CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 16M
    UNDO_BUFFER_SIZE 2M
    ENGINE NDB;

ALTER LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_2.log'
    INITIAL_SIZE 12M
    ENGINE NDB;

CREATE TABLESPACE ts_1
    ADD DATAFILE 'data_1.dat'
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 32M
    ENGINE NDB;

ALTER TABLESPACE ts_1
    ADD DATAFILE 'data_2.dat'
    INITIAL_SIZE 48M
    ENGINE NDB;

(For more information on the statements just shown and the objects created by them, see Section 21.5.13.1, “NDB Cluster Disk Data Objects”, as well as Section 13.1.15, “CREATE LOGFILE GROUP Syntax”, and Section 13.1.19, “CREATE TABLESPACE Syntax”.)

Now we can create and populate a version of the fish table that stores 2 of its columns on disk (deleting the previous version of the table first, if it already exists):

CREATE TABLE fish (
    id INT(11) NOT NULL AUTO_INCREMENT,
    name VARCHAR(20) NOT NULL,
    length_mm INT(11) NOT NULL,
    weight_gm INT(11) NOT NULL,

    PRIMARY KEY pk (id),
    UNIQUE KEY uk (name)
) TABLESPACE ts_1 STORAGE DISK
ENGINE=NDB;

INSERT INTO fish VALUES
    (NULL, 'guppy', 35, 2), (NULL, 'tuna', 2500, 150000),
    (NULL, 'shark', 3000, 110000), (NULL, 'manta ray', 1500, 50000),
    (NULL, 'grouper', 900, 125000), (NULL ,'puffer', 250, 2500);

When run against this version of the table, ndb_desc displays the following output:

shell> ./ndb_desc -c localhost fish -d test -p
-- fish --
Version: 1
Fragment type: HashMapPartition
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 4
Number of primary keys: 1
Length of frm data: 346
Max Rows: 0
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
PartitionCount: 2
FragmentCount: 2
FragmentCountType: ONE_PER_LDM_PER_NODE
ExtraRowGciBits: 0
ExtraRowAuthorBits: 0
TableStatus: Retrieved
Table options:
HashMap: DEFAULT-HASHMAP-3840-2
-- Attributes --
id Int PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
name Varchar(20;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
length_mm Int NOT NULL AT=FIXED ST=DISK
weight_gm Int NOT NULL AT=FIXED ST=DISK
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
uk(name) - OrderedIndex
uk$unique(name) - UniqueHashIndex
-- Per partition info --
Partition       Row count       Commit count    Frag fixed memory       Frag varsized memory    Extent_space    Free extent_space
0               2               2               32768                   32768                   1048576         1044440
1               4               4               32768                   32768                   1048576         1044400


NDBT_ProgramExit: 0 - OK

This means that 1048576 bytes are allocated from the tablespace for this table on each partition, of which 1044440 bytes remain free for additional storage. In other words, 1048576 - 1044440 = 4136 bytes per partition is currently being used to store the data from this table's disk-based columns. The number of bytes shown as Free extent_space is available for storing on-disk column data from the fish table only; for this reason, it is not visible when selecting from the INFORMATION_SCHEMA.FILES table.
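
The arithmetic in the preceding paragraph can be verified directly. This small Python sketch (ours) reproduces the used-space figure from the Extent_space and Free extent_space values shown for partition 0:

```python
# Values from partition 0 of the sample ndb_desc output above.
extent_space = 1048576       # bytes allocated from the tablespace
free_extent_space = 1044440  # bytes still free for this table's disk data

# Bytes currently used by the table's on-disk columns in this partition.
used = extent_space - free_extent_space
print(used)  # → 4136
```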

For fully replicated tables, ndb_desc shows only the nodes holding primary partition fragment replicas; nodes with copy fragment replicas (only) are ignored. Beginning with NDB 7.5.4, you can obtain such information, using the mysql client, from the table_distribution_status, table_fragments, table_info, and table_replicas tables in the ndbinfo database.

The following table includes options that are specific to ndb_desc. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_desc), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.323 Command-line options for the ndb_desc program

Format                      Description                                                                                         Added, Deprecated, or Removed
--blob-info, -b             Include partition information for BLOB tables in output; requires that the -p option also be used   All MySQL 5.7 based releases
--database=dbname, -d       Name of database containing table                                                                   All MySQL 5.7 based releases
--extra-node-info, -n       Include partition-to-data-node mappings in output; requires that the -p option also be used         All MySQL 5.7 based releases
--extra-partition-info, -p  Display information about partitions                                                                All MySQL 5.7 based releases
--retries=#, -r             Number of times to retry the connection (once per second)                                           All MySQL 5.7 based releases
--table=tbl_name, -t        Specify the table in which to find an index; when used, -p and -n have no effect and are ignored    All MySQL 5.7 based releases
--unqualified, -u           Use unqualified table names                                                                         All MySQL 5.7 based releases


  • --blob-info, -b

    Include information about subordinate BLOB and TEXT columns.

    Use of this option also requires the use of the --extra-partition-info (-p) option.

  • --database=db_name, -d

    Specify the database in which the table should be found.

  • --extra-node-info, -n

    Include information about the mappings between table partitions and the data nodes upon which they reside. This information can be useful for verifying distribution awareness mechanisms and supporting more efficient application access to the data stored in NDB Cluster.

    Use of this option also requires the use of the --extra-partition-info (-p) option.

  • --extra-partition-info, -p

    Print additional information about the table's partitions.

  • --retries=#, -r

    Try to connect this many times before giving up. One connect attempt is made per second.

  • --table=tbl_name, -t

    Specify the table in which to look for an index.

  • --unqualified, -u

    Use unqualified table names.

In NDB 7.5.3 and later, table indexes listed in the output are ordered by ID. Previously, this was not deterministic and could vary between platforms. (Bug #81763, Bug #23547742)
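
As a client-side illustration only (not part of ndb_desc itself), the once-per-second retry behavior selected by --retries can be sketched as a small shell wrapper; here `true` stands in for the real ndb_desc invocation.

```shell
# Retry a command up to $1 times, pausing one second between
# attempts, as ndb_desc does with --retries=N.
retry() {
  n=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$n" ] && return 1
    sleep 1
  done
}

retry 3 true && echo "connected"
```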

21.4.11 ndb_drop_index — Drop Index from an NDB Table

ndb_drop_index drops the specified index from an NDB table. It is recommended that you use this utility only as an example for writing NDB API applications—see the Warning later in this section for details.

Usage

ndb_drop_index -c connection_string table_name index -d db_name

The invocation shown above drops the index named index from the table table_name in the database db_name.

The following table includes options that are specific to ndb_drop_index. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_drop_index), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.324 Command-line options for the ndb_drop_index program

Format                      Description                                         Added, Deprecated, or Removed

--database=dbname, -d       Name of the database in which the table is found    All MySQL 5.7 based releases


Warning

Operations performed on Cluster table indexes using the NDB API are not visible to MySQL and make the table unusable by a MySQL server. If you use this program to drop an index, then try to access the table from an SQL node, an error results, as shown here:

shell> ./ndb_drop_index -c localhost dogs idx -d ctest1
Dropping index dogs/idx...OK

NDBT_ProgramExit: 0 - OK

shell> ./mysql -u jon -p ctest1
Enter password: *******
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7 to server version: 5.7.28-ndb-7.5.16

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW TABLES;
+------------------+
| Tables_in_ctest1 |
+------------------+
| a                |
| bt1              |
| bt2              |
| dogs             |
| employees        |
| fish             |
+------------------+
6 rows in set (0.00 sec)

mysql> SELECT * FROM dogs;
ERROR 1296 (HY000): Got error 4243 'Index not found' from NDBCLUSTER

In such a case, your only option for making the table available to MySQL again is to drop the table and re-create it. You can use either the SQL statement DROP TABLE or the ndb_drop_table utility (see Section 21.4.12, “ndb_drop_table — Drop an NDB Table”) to drop the table.

21.4.12 ndb_drop_table — Drop an NDB Table

ndb_drop_table drops the specified NDB table. (If you try to use this on a table created with a storage engine other than NDB, the attempt fails with the error 723: No such table exists.) This operation is extremely fast; in some cases, it can be an order of magnitude faster than using a MySQL DROP TABLE statement on an NDB table.

Usage

ndb_drop_table -c connection_string tbl_name -d db_name

The following table includes options that are specific to ndb_drop_table. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_drop_table), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.325 Command-line options for the ndb_drop_table program

Format                      Description                                         Added, Deprecated, or Removed

--database=dbname, -d       Name of the database in which the table is found    All MySQL 5.7 based releases


21.4.13 ndb_error_reporter — NDB Error-Reporting Utility

ndb_error_reporter creates an archive from data node and management node log files that can be used to help diagnose bugs or other problems with a cluster. It is highly recommended that you make use of this utility when filing reports of bugs in NDB Cluster.

The following table includes command options specific to the NDB Cluster program ndb_error_reporter. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_error_reporter), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.326 Command-line options for the ndb_error_reporter program

Format                              Description                                                                        Added, Deprecated, or Removed

--connection-timeout=timeout        Number of seconds to wait when connecting to nodes before timing out               All MySQL 5.7 based releases
--dry-scp                           Disable scp with remote hosts; used only for testing                               All MySQL 5.7 based releases
--fs                                Include file system data in error report; can use a large amount of disk space     All MySQL 5.7 based releases
--skip-nodegroup=nodegroup_id       Skip all nodes in the node group having this ID                                    All MySQL 5.7 based releases


Usage

ndb_error_reporter path/to/config-file [username] [options]

This utility is intended for use on a management node host, and requires the path to the management host configuration file (usually named config.ini). Optionally, you can supply the name of a user that is able to access the cluster's data nodes using SSH, to copy the data node log files. ndb_error_reporter then includes all of these files in an archive that is created in the same directory in which it is run. The archive is named ndb_error_report_YYYYMMDDhhmmss.tar.bz2, where YYYYMMDDhhmmss is a datetime string.
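
The archive name can be anticipated from the current time; this sketch only illustrates the naming scheme, assuming a POSIX date utility.

```shell
# Reconstruct the name ndb_error_reporter gives its archive:
# ndb_error_report_YYYYMMDDhhmmss.tar.bz2
ts=$(date +%Y%m%d%H%M%S)
archive="ndb_error_report_${ts}.tar.bz2"
echo "$archive"
```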

ndb_error_reporter also accepts the options listed here:

  • --connection-timeout=timeout

    Property Value
    Command-Line Format --connection-timeout=timeout
    Type Integer
    Default Value 0

    Wait this many seconds when trying to connect to nodes before timing out.

  • --dry-scp

    Property Value
    Command-Line Format --dry-scp
    Type Boolean
    Default Value TRUE

    Run ndb_error_reporter without using scp from remote hosts. Used for testing only.

  • --fs

    Property Value
    Command-Line Format --fs
    Type Boolean
    Default Value FALSE

    Copy the data node file systems to the management host and include them in the archive.

    Because data node file systems can be extremely large, even after being compressed, we ask that you please do not send archives created using this option to Oracle unless you are specifically requested to do so.

  • --skip-nodegroup=nodegroup_id

    Property Value
    Command-Line Format --skip-nodegroup=nodegroup_id
    Type Integer
    Default Value 0

    Skip all nodes belonging to the node group having the supplied node group ID.

21.4.14 ndb_import — Import CSV Data Into NDB

ndb_import imports CSV-formatted data, such as that produced by mysqldump --tab, directly into NDB using the NDB API. ndb_import requires a connection to an NDB management server (ndb_mgmd) to function; it does not require a connection to a MySQL Server.

Usage

ndb_import db_name file_name options

ndb_import requires two arguments. db_name is the name of the database where the table into which to import the data is found; file_name is the name of the CSV file from which to read the data; this must include the path to this file if it is not in the current directory. The name of the file must match that of the table; the file's extension, if any, is not taken into consideration. Options supported by ndb_import include those for specifying field separators, escapes, and line terminators, and are described later in this section. ndb_import must be able to connect to an NDB Cluster management server; for this reason, there must be an unused [api] slot in the cluster config.ini file.
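
Because the file name must match the table name (with the extension ignored), the table ndb_import targets can be derived from the path; the file name here is illustrative.

```shell
# ndb_import matches the CSV file to a table by base name,
# ignoring any extension.
csv=/tmp/myndb_table.csv
table=$(basename "$csv")
table=${table%.*}
echo "importing into table: $table"
```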

To duplicate an existing table that uses a different storage engine, such as InnoDB, as an NDB table, use the mysql client to execute a SELECT INTO OUTFILE statement that exports the existing table to a CSV file, then a CREATE TABLE LIKE statement that creates a new table having the same structure as the existing one, followed by ALTER TABLE ... ENGINE=NDB on the new table; after this, from the system shell, invoke ndb_import to load the data into the new NDB table. For example, an existing InnoDB table named myinnodb_table in a database named myinnodb can be exported into an NDB table named myndb_table in a database named myndb as shown here, assuming that you are already logged in as a MySQL user with the appropriate privileges:

  1. In the mysql client:

    mysql> USE myinnodb;
    
    mysql> SELECT * INTO OUTFILE '/tmp/myndb_table.csv'
         >  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
         >  LINES TERMINATED BY '\n'
     >  FROM myinnodb_table;
    
    mysql> CREATE DATABASE myndb;
    
    mysql> USE myndb;
    
    mysql> CREATE TABLE myndb_table LIKE myinnodb.myinnodb_table;
    
    mysql> ALTER TABLE myndb_table ENGINE=NDB;
    
    mysql> EXIT;
    Bye
    shell>
    

    Once the target database and table have been created, a running mysqld is no longer required. You can stop it using mysqladmin shutdown or another method before proceeding, if you wish.

  2. In the system shell:

    # if you are not already in the MySQL bin directory:
    shell> cd path-to-mysql-bin-dir
    
    shell> ndb_import myndb /tmp/myndb_table.csv --fields-optionally-enclosed-by='"' \
        --fields-terminated-by="," --fields-escaped-by='\\'
    

    The output should resemble what is shown here:

    job-1 import myndb.myndb_table from /tmp/myndb_table.csv
    job-1 [running] import myndb.myndb_table from /tmp/myndb_table.csv
    job-1 [success] import myndb.myndb_table from /tmp/myndb_table.csv
    job-1 imported 19984 rows in 0h0m9s at 2277 rows/s
    jobs summary: defined: 1 run: 1 with success: 1 with failure: 0
    shell>
    

The following table includes options that are specific to ndb_import. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_import), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.327 Command-line options for the ndb_import program

Format                                   Description                                                                                                                          Added, Deprecated, or Removed

--abort-on-error                         Dump core on any fatal error; used for debugging                                                                                     ADDED: NDB 7.6.2
--ai-increment=#                         For table with hidden PK, specify autoincrement increment. See mysqld                                                                ADDED: NDB 7.6.2
--ai-offset=#                            For table with hidden PK, specify autoincrement offset. See mysqld                                                                   ADDED: NDB 7.6.2
--ai-prefetch-sz=#                       For table with hidden PK, specify number of autoincrement values that are prefetched. See mysqld                                     ADDED: NDB 7.6.2
--connections=#                          Number of cluster connections to create                                                                                              ADDED: NDB 7.6.2
--continue                               When job fails, continue to next job                                                                                                 ADDED: NDB 7.6.2
--db-workers=#                           Number of threads, per data node, executing database operations                                                                      ADDED: NDB 7.6.2
--errins-type=name                       Error insert type, for testing purposes; use "list" to obtain all possible values                                                    ADDED: NDB 7.6.2
--errins-delay=#                         Error insert delay in milliseconds; random variation is added                                                                        ADDED: NDB 7.6.2
--fields-enclosed-by=char                Same as FIELDS ENCLOSED BY option for LOAD DATA statements; for CSV input this is the same as using --fields-optionally-enclosed-by  ADDED: NDB 7.6.2
--fields-escaped-by=name                 Same as FIELDS ESCAPED BY option for LOAD DATA statements                                                                            ADDED: NDB 7.6.2
--fields-optionally-enclosed-by=char     Same as FIELDS OPTIONALLY ENCLOSED BY option for LOAD DATA statements                                                                ADDED: NDB 7.6.2
--fields-terminated-by=char              Same as FIELDS TERMINATED BY option for LOAD DATA statements                                                                         ADDED: NDB 7.6.2
--idlesleep=#                            Number of milliseconds to sleep waiting for more to do                                                                               ADDED: NDB 7.6.2
--idlespin=#                             Number of times to retry before idlesleep                                                                                            ADDED: NDB 7.6.2
--ignore-lines=#                         Ignore first # lines in input file; used to skip a non-data header                                                                   ADDED: NDB 7.6.2
--input-type=name                        Input type: random or csv                                                                                                            ADDED: NDB 7.6.2
--input-workers=#                        Number of threads processing input; must be 2 or more if --input-type is csv                                                         ADDED: NDB 7.6.2
--keep-state                             Preserve state files                                                                                                                 ADDED: NDB 7.6.4
--lines-terminated-by=name               Same as LINES TERMINATED BY option for LOAD DATA statements                                                                          ADDED: NDB 7.6.2
--log-level=#                            Set internal logging level; for debugging and development                                                                            ADDED: NDB 7.6.4
--max-rows=#                             Import only this number of input data rows; default is 0, which imports all rows                                                     ADDED: NDB 7.6.2
--monitor=#                              Periodically print status of running job if something has changed (status, rejected rows, temporary errors); 0 disables, 1 prints any change seen, higher values reduce status printing exponentially up to some pre-defined limit   ADDED: NDB 7.6.2
--no-asynch                              Run database operations as batches, in single transactions                                                                           ADDED: NDB 7.6.2
--no-hint                                Do not use distribution key hint to select data node (TC)                                                                            ADDED: NDB 7.6.2
--opbatch=#                              Limit NDB operations (including blob operations), and thus the number of asynchronous transactions, per db execution batch; 0 is not valid   ADDED: NDB 7.6.2
--opbytes=#                              Limit bytes in execution batch (default 0 = no limit)                                                                                ADDED: NDB 7.6.2
--output-type=name                       Output type: ndb is default, null used for testing                                                                                   ADDED: NDB 7.6.2
--output-workers=#                       Number of threads processing output or relaying database operations                                                                  ADDED: NDB 7.6.2
--pagesize=#                             Align I/O buffers to given size                                                                                                      ADDED: NDB 7.6.2
--pagecnt=#                              Size of I/O buffers as multiple of page size; CSV input worker allocates a double-sized buffer                                       ADDED: NDB 7.6.2
--polltimeout=#                          Timeout per poll for completed asynchronous transactions; polling continues until all polls are completed, or an error occurs        ADDED: NDB 7.6.2
--rejects=#                              Limit number of rejected rows (rows with permanent error) in data load; default 0 means any rejected row causes a fatal error; the row exceeding the limit is also added to *.rej   ADDED: NDB 7.6.2
--resume                                 If job aborted (temporary error, user interrupt), resume with rows not yet processed                                                 ADDED: NDB 7.6.2
--rowbatch=#                             Limit rows in row queues (default 0 = no limit); must be 1 or more if --input-type is random                                         ADDED: NDB 7.6.2
--rowbytes=#                             Limit bytes in row queues (0 = no limit)                                                                                             ADDED: NDB 7.6.2
--state-dir=name                         Where to write state files; current directory is default                                                                             ADDED: NDB 7.6.2
--stats                                  Save performance and statistics information in *.sto and *.stt files                                                                 ADDED: NDB 7.6.4
--tempdelay=#                            Number of milliseconds to sleep between temporary errors                                                                             ADDED: NDB 7.6.2
--temperrors=#                           Number of times a transaction can fail due to a temporary error, per execution batch; 0 means any temporary error is fatal; such errors do not cause any rows to be written to the .rej file   ADDED: NDB 7.6.2
--verbose=#, -v                          Enable verbose output                                                                                                                ADDED: NDB 7.6.2


  • --abort-on-error

    Property Value
    Command-Line Format --abort-on-error
    Introduced 5.7.18-ndb-7.6.2
    Type Boolean
    Default Value FALSE

    Dump core on any fatal error; used for debugging only.

  • --ai-increment=#

    Property Value
    Command-Line Format --ai-increment=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 1
    Minimum Value 1
    Maximum Value 4294967295

    For a table with a hidden primary key, specify the autoincrement increment, like the auto_increment_increment system variable does in the MySQL Server.

  • --ai-offset=#

    Property Value
    Command-Line Format --ai-offset=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 1
    Minimum Value 1
    Maximum Value 4294967295

    For a table with a hidden primary key, specify the autoincrement offset. Similar to the auto_increment_offset system variable.

  • --ai-prefetch-sz=#

    Property Value
    Command-Line Format --ai-prefetch-sz=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 1024
    Minimum Value 1
    Maximum Value 4294967295

    For a table with a hidden primary key, specify the number of autoincrement values that are prefetched. Behaves like the ndb_autoincrement_prefetch_sz system variable does in the MySQL Server.

  • --connections=#

    Property Value
    Command-Line Format --connections=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 1
    Minimum Value 1
    Maximum Value 4294967295

    Number of cluster connections to create.

  • --continue

    Property Value
    Command-Line Format --continue
    Introduced 5.7.18-ndb-7.6.2
    Type Boolean
    Default Value FALSE

    When a job fails, continue to the next job.

  • --db-workers=#

    Property Value
    Command-Line Format --db-workers=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value (>= 5.7.20-ndb-7.6.4) 4
    Default Value (>= 5.7.18-ndb-7.6.2, <= 5.7.18-ndb-7.6.3) 1
    Minimum Value 1
    Maximum Value 4294967295

    Number of threads, per data node, executing database operations.

  • --errins-type=name

    Property Value
    Command-Line Format --errins-type=name
    Introduced 5.7.18-ndb-7.6.2
    Type Enumeration
    Default Value [none]
    Valid Values

    stopjob

    stopall

    sighup

    sigint

    list

    Error insert type; use list as the name value to obtain all possible values. This option is used for testing purposes only.

  • --errins-delay=#

    Property Value
    Command-Line Format --errins-delay=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 1000
    Minimum Value 0
    Maximum Value 4294967295

    Error insert delay in milliseconds; random variation is added. This option is used for testing purposes only.

  • --fields-enclosed-by=char

    Property Value
    Command-Line Format --fields-enclosed-by=char
    Introduced 5.7.18-ndb-7.6.2
    Type String
    Default Value [none]

    This works in the same way as the FIELDS ENCLOSED BY option does for the LOAD DATA statement, specifying a character to be interpreted as quoting field values. For CSV input, this is the same as --fields-optionally-enclosed-by.

  • --fields-escaped-by=name

    Property Value
    Command-Line Format --fields-escaped-by=name
    Introduced 5.7.18-ndb-7.6.2
    Type String
    Default Value \

    Specify an escape character in the same way as the FIELDS ESCAPED BY option does for the SQL LOAD DATA statement.

  • --fields-optionally-enclosed-by=char

    Property Value
    Command-Line Format --fields-optionally-enclosed-by=char
    Introduced 5.7.18-ndb-7.6.2
    Type String
    Default Value [none]

    This works in the same way as the FIELDS OPTIONALLY ENCLOSED BY option does for the LOAD DATA statement, specifying a character to be interpreted as optionally quoting field values. For CSV input, this is the same as --fields-enclosed-by.

  • --fields-terminated-by=char

    Property Value
    Command-Line Format --fields-terminated-by=char
    Introduced 5.7.18-ndb-7.6.2
    Type String
    Default Value \t

    This works in the same way as the FIELDS TERMINATED BY option does for the LOAD DATA statement, specifying a character to be interpreted as the field separator.

  • --idlesleep=#

    Property Value
    Command-Line Format --idlesleep=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 1
    Minimum Value 1
    Maximum Value 4294967295

    Number of milliseconds to sleep waiting for more work to perform.

  • --idlespin=#

    Property Value
    Command-Line Format --idlespin=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 4294967295

    Number of times to retry before sleeping.

  • --ignore-lines=#

    Property Value
    Command-Line Format --ignore-lines=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 4294967295

    Cause ndb_import to ignore the first # lines of the input file. This can be employed to skip a file header that does not contain any data.
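
The same effect as --ignore-lines=1 can also be had by stripping the header before the load; the file names below are hypothetical.

```shell
# Drop a one-line header from a CSV, as --ignore-lines=1 would do.
printf 'id,name\n1,guppy\n2,tetra\n' > /tmp/fish_with_header.csv
tail -n +2 /tmp/fish_with_header.csv > /tmp/fish.csv
cat /tmp/fish.csv
```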

  • --input-type=name

    Property Value
    Command-Line Format --input-type=name
    Introduced 5.7.18-ndb-7.6.2
    Type Enumeration
    Default Value csv
    Valid Values

    random

    csv

    Set the input type. The default is csv; random is intended for testing purposes only.

  • --input-workers=#

    Property Value
    Command-Line Format --input-workers=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value (>= 5.7.20-ndb-7.6.4) 4
    Default Value (>= 5.7.18-ndb-7.6.2, <= 5.7.18-ndb-7.6.3) 2
    Minimum Value 1
    Maximum Value 4294967295

    Set the number of threads processing input.

  • --keep-state

    Property Value
    Command-Line Format --keep-state
    Introduced 5.7.20-ndb-7.6.4
    Type Boolean
    Default Value false

    By default, ndb_import removes all state files (except non-empty *.rej files) when it completes a job. Specify this option (no argument is required) to force the program to retain all state files instead.

  • --lines-terminated-by=name

    Property Value
    Command-Line Format --lines-terminated-by=name
    Introduced 5.7.18-ndb-7.6.2
    Type String
    Default Value \n

    This works in the same way as the LINES TERMINATED BY option does for the LOAD DATA statement, specifying a character to be interpreted as end-of-line.

  • --log-level=#

    Property Value
    Command-Line Format --log-level=#
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 2

    Performs internal logging at the given level. This option is intended primarily for internal and development use.

    In debug builds of NDB only, the logging level can be set using this option to a maximum of 4.

  • --max-rows=#

    Property Value
    Command-Line Format --max-rows=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 4294967295

    Import only this number of input data rows; the default is 0, which imports all rows.

  • --monitor=#

    Property Value
    Command-Line Format --monitor=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 2
    Minimum Value 0
    Maximum Value 4294967295

    Periodically print the status of a running job if something has changed (status, rejected rows, temporary errors). Set to 0 to disable this reporting. Setting to 1 prints any change that is seen. Higher values reduce the frequency of this status reporting.

  • --no-asynch

    Property Value
    Command-Line Format --no-asynch
    Introduced 5.7.18-ndb-7.6.2
    Type Boolean
    Default Value FALSE

    Run database operations as batches, in single transactions.

  • --no-hint

    Property Value
    Command-Line Format --no-hint
    Introduced 5.7.18-ndb-7.6.2
    Type Boolean
    Default Value FALSE

    Do not use distribution key hinting to select a data node.

  • --opbatch=#

    Property Value
    Command-Line Format --opbatch=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 256
    Minimum Value 1
    Maximum Value 4294967295

    Set a limit on the number of operations (including blob operations), and thus the number of asynchronous transactions, per execution batch.

  • --opbytes=#

    Property Value
    Command-Line Format --opbytes=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 4294967295

    Set a limit on the number of bytes per execution batch. Use 0 for no limit.

  • --output-type=name

    Property Value
    Command-Line Format --output-type=name
    Introduced 5.7.18-ndb-7.6.2
    Type Enumeration
    Default Value ndb
    Valid Values null

    Set the output type. ndb is the default. null is used only for testing.

  • --output-workers=#

    Property Value
    Command-Line Format --output-workers=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 2
    Minimum Value 1
    Maximum Value 4294967295

    Set the number of threads processing output or relaying database operations.

  • --pagesize=#

    --页面大小=#

    Property Value
    Command-Line Format --pagesize=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 4096
    Minimum Value 1
    Maximum Value 4294967295

    Align I/O buffers to the given size.

    将I/O缓冲区与给定大小对齐。

  • --pagecnt=#

    --页码=#

    Property Value
    Command-Line Format --pagecnt=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 64
    Minimum Value 1
    Maximum Value 4294967295

    Set the size of I/O buffers as multiple of page size. The CSV input worker allocates buffer that is doubled in size.

  • --polltimeout=#

    Property Value
    Command-Line Format --polltimeout=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 1000
    Minimum Value 1
    Maximum Value 4294967295

    Set a timeout per poll for completed asynchronous transactions; polling continues until all polls are completed, or until an error occurs.

  • --rejects=#

    Property Value
    Command-Line Format --rejects=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 4294967295

    Limit the number of rejected rows (rows with permanent errors) in the data load. The default is 0, which means that any rejected row causes a fatal error. Any rows causing the limit to be exceeded are added to the .rej file.

    The limit imposed by this option is effective for the duration of the current run. A run restarted using --resume is considered a new run for this purpose.

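    The accounting behind --rejects amounts to counting permanently failed rows and aborting once the count exceeds the limit. A minimal Python sketch of that behavior (hypothetical row representation, not the real loader):

    ```python
    def load_rows(rows, rejects=0):
        """Simulate --rejects accounting: each element of rows is True
        (row loads cleanly) or False (permanent error).  Returns a
        (loaded, rejected) pair; raises RuntimeError as soon as more
        than `rejects` rows have failed, mirroring the default
        rejects=0 where any rejected row is fatal."""
        loaded, rejected = 0, 0
        for ok in rows:
            if ok:
                loaded += 1
            else:
                rejected += 1   # this row would be written to the .rej file
                if rejected > rejects:
                    raise RuntimeError("rejected-row limit exceeded; load aborted")
        return loaded, rejected
    ```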
  • --resume

    Property Value
    Command-Line Format --resume
    Introduced 5.7.18-ndb-7.6.2
    Type Boolean
    Default Value FALSE

    If a job is aborted (due to a temporary database error or because it was interrupted by the user), resume with any rows not yet processed.

  • --rowbatch=#

    Property Value
    Command-Line Format --rowbatch=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 4294967295

    Set a limit on the number of rows per row queue. Use 0 for no limit.

  • --rowbytes=#

    Property Value
    Command-Line Format --rowbytes=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 262144
    Minimum Value 0
    Maximum Value 4294967295

    Set a limit on the number of bytes per row queue. Use 0 for no limit.

  • --stats

    Property Value
    Command-Line Format --stats
    Introduced 5.7.20-ndb-7.6.4
    Type Boolean
    Default Value false

    Save information about options related to performance and other internal statistics in files named *.sto and *.stt. These files are always kept on successful completion (even if --keep-state is not also specified).

  • --state-dir=name

    Property Value
    Command-Line Format --state-dir=name
    Introduced 5.7.18-ndb-7.6.2
    Type String
    Default Value .

    Where to write the state files (tbl_name.map, tbl_name.rej, tbl_name.res, and tbl_name.stt) produced by a run of the program; the default is the current directory.

  • --tempdelay=#

    Property Value
    Command-Line Format --tempdelay=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 10
    Minimum Value 0
    Maximum Value 4294967295

    Number of milliseconds to sleep between temporary errors.

  • --temperrors=#

    Property Value
    Command-Line Format --temperrors=#
    Introduced 5.7.18-ndb-7.6.2
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 4294967295

    Number of times a transaction can fail due to a temporary error, per execution batch. The default is 0, which means that any temporary error is fatal. Temporary errors do not cause any rows to be added to the .rej file.

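    Combined with --tempdelay, this gives a simple retry policy per execution batch: sleep between temporary failures and give up once the failure count exceeds --temperrors. An illustrative Python sketch (the execute callable and TimeoutError stand in for a real batch and an NDB temporary error):

    ```python
    import time

    def run_batch(execute, temperrors=0, tempdelay=10):
        """Run one execution batch under --temperrors/--tempdelay semantics.

        execute:    callable that raises TimeoutError on a temporary failure.
        temperrors: temporary failures tolerated; 0 (the default) makes
                    any temporary error fatal.
        tempdelay:  milliseconds to sleep between retries.
        """
        failures = 0
        while True:
            try:
                return execute()
            except TimeoutError:
                failures += 1
                if failures > temperrors:
                    raise                       # limit reached: now fatal
                time.sleep(tempdelay / 1000.0)  # back off before retrying
    ```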
  • --verbose, -v

    Property Value
    Command-Line Format --verbose
    Introduced 5.7.18-ndb-7.6.2
    Type (>= 5.7.20-ndb-7.6.4) Boolean
    Type (>= 5.7.18-ndb-7.6.2, <= 5.7.18-ndb-7.6.3) Integer
    Default Value (>= 5.7.20-ndb-7.6.4) false
    Default Value (>= 5.7.18-ndb-7.6.2, <= 5.7.18-ndb-7.6.3) 0
    Minimum Value 0
    Maximum Value 2

    NDB 7.6.4 and later: Enable verbose output.

    Previously, this option controlled the internal logging level for debugging messages. In NDB 7.6.4 and later, use the --log-level option for this purpose instead.

As with LOAD DATA, options for field and line formatting must match those used to create the CSV file, whether this was done using SELECT INTO ... OUTFILE or by some other means. There is no equivalent to the LOAD DATA statement's STARTING WITH option.

ndb_import was added in NDB 7.6.2.

21.4.15 ndb_index_stat — NDB Index Statistics Utility

ndb_index_stat provides per-fragment statistical information about indexes on NDB tables. This includes cache version and age, number of index entries per partition, and memory consumption by indexes.

Usage

To obtain basic index statistics about a given NDB table, invoke ndb_index_stat as shown here, with the name of the table as the first argument and the name of the database containing this table specified immediately following it, using the --database (-d) option:

ndb_index_stat table -d database

In this example, we use ndb_index_stat to obtain such information about an NDB table named mytable in the test database:

shell> ndb_index_stat -d test mytable
table:City index:PRIMARY fragCount:2
sampleVersion:3 loadTime:1399585986 sampleCount:1994 keyBytes:7976
query cache: valid:1 sampleCount:1994 totalBytes:27916
times in ms: save: 7.133 sort: 1.974 sort per sample: 0.000

NDBT_ProgramExit: 0 - OK

sampleVersion is the version number of the cache from which the statistics data is taken. Running ndb_index_stat with the --update option causes sampleVersion to be incremented.

loadTime shows when the cache was last updated. This is expressed as seconds since the Unix Epoch.

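Because loadTime is a Unix timestamp, it converts directly to a calendar date with any standard library; for instance, in Python:

```python
from datetime import datetime, timezone

load_time = 1399585986  # loadTime from the sample output above
updated = datetime.fromtimestamp(load_time, tz=timezone.utc)
print(updated.isoformat())  # 2014-05-08T21:53:06+00:00
```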
sampleCount is the number of index entries found per partition. You can estimate the total number of entries by multiplying this by the number of fragments (shown as fragCount).

sampleCount can be compared with the cardinality of SHOW INDEX or INFORMATION_SCHEMA.STATISTICS, although the latter two provide a view of the table as a whole, while ndb_index_stat provides a per-fragment average.

keyBytes is the number of bytes used by the index. In this example, the primary key is an integer, which requires four bytes for each index, so keyBytes can be calculated in this case as shown here:

    keyBytes = sampleCount * (4 bytes per index) = 1994 * 4 = 7976
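
The same arithmetic, checked programmatically against the sample output above, also yields the total-entries estimate described earlier:

```python
frag_count   = 2     # fragCount from the sample output
sample_count = 1994  # sampleCount (per-fragment average)
key_size     = 4     # bytes per integer primary key entry

total_entries = sample_count * frag_count  # estimated entries in the index
key_bytes     = sample_count * key_size    # reproduces the reported keyBytes

print(total_entries, key_bytes)  # 3988 7976
```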

This information can also be obtained using the corresponding column definitions from INFORMATION_SCHEMA.COLUMNS (this requires a MySQL Server and a MySQL client application).

totalBytes is the total memory consumed by all indexes on the table, in bytes.

Timings shown in the preceding examples are specific to each invocation of ndb_index_stat.

The --verbose option provides some additional output, as shown here:

shell> ndb_index_stat -d test mytable --verbose
random seed 1337010518
connected
loop 1 of 1
table:mytable index:PRIMARY fragCount:4
sampleVersion:2 loadTime:1336751773 sampleCount:0 keyBytes:0
read stats
query cache created
query cache: valid:1 sampleCount:0 totalBytes:0
times in ms: save: 20.766 sort: 0.001
disconnected

NDBT_ProgramExit: 0 - OK

shell>

If the only output from the program is NDBT_ProgramExit: 0 - OK, this may indicate that no statistics yet exist. To force them to be created (or updated if they already exist), invoke ndb_index_stat with the --update option, or execute ANALYZE TABLE on the table in the mysql client.

Options

The following table includes options that are specific to the NDB Cluster ndb_index_stat utility. Additional descriptions are listed following the table. For options common to most NDB Cluster programs (including ndb_index_stat), see Section 21.4.32, "Options Common to NDB Cluster Programs".

Table 21.328 Command-line options for the ndb_index_stat program

Format Description Added, Deprecated, or Removed

--database=name, -d
Name of the database containing the table
All MySQL 5.7 based releases

--delete
Delete index statistics for the given table, stopping any auto-update previously configured
All MySQL 5.7 based releases

--update
Update index statistics for the given table, restarting any auto-update previously configured
All MySQL 5.7 based releases

--dump
Print the query cache
All MySQL 5.7 based releases

--query=#
Perform a number of random range queries on first key attr (must be int unsigned)
All MySQL 5.7 based releases

--sys-drop
Drop any statistics tables and events in the NDB kernel (all statistics are lost)
All MySQL 5.7 based releases

--sys-create
Create all statistics tables and events in the NDB kernel, if none of them already exist
All MySQL 5.7 based releases

--sys-create-if-not-exist
Create any statistics tables and events in the NDB kernel that do not already exist
All MySQL 5.7 based releases

--sys-create-if-not-valid
Create any statistics tables or events that do not already exist in the NDB kernel, after dropping any that are invalid
All MySQL 5.7 based releases

--sys-check
Verify that NDB system index statistics and event tables exist
All MySQL 5.7 based releases

--sys-skip-tables
Do not apply sys-* options to tables
All MySQL 5.7 based releases

--sys-skip-events
Do not apply sys-* options to events
All MySQL 5.7 based releases

--verbose, -v
Turn on verbose output
All MySQL 5.7 based releases

--loops=#
Set the number of times to perform a given command (default is 0)
All MySQL 5.7 based releases


ndb_index_stat statistics options.  The following options are used to generate index statistics. They work with a given table and database. They cannot be mixed with system options (see ndb_index_stat system options).

  • --database=name, -d name

    Property Value
    Command-Line Format --database=name
    Type String
    Default Value [none]
    Minimum Value
    Maximum Value

    The name of the database that contains the table being queried.

  • --delete

    Property Value
    Command-Line Format --delete
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Delete the index statistics for the given table, stopping any auto-update that was previously configured.

  • --update

    Property Value
    Command-Line Format --update
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Update the index statistics for the given table, and restart any auto-update that was previously configured.

  • --dump

    Property Value
    Command-Line Format --dump
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Dump the contents of the query cache.

  • --query=#

    Property Value
    Command-Line Format --query=#
    Type Numeric
    Default Value 0
    Minimum Value 0
    Maximum Value MAX_INT

    Perform random range queries on first key attribute (must be int unsigned).

ndb_index_stat system options.  The following options are used to generate and update the statistics tables in the NDB kernel. None of these options can be mixed with statistics options (see ndb_index_stat statistics options).

  • --sys-drop

    Property Value
    Command-Line Format --sys-drop
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Drop all statistics tables and events in the NDB kernel. This causes all statistics to be lost.

  • --sys-create

    Property Value
    Command-Line Format --sys-create
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Create all statistics tables and events in the NDB kernel. This works only if none of them exist previously.

  • --sys-create-if-not-exist

    Property Value
    Command-Line Format --sys-create-if-not-exist
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Create any NDB system statistics tables or events (or both) that do not already exist when the program is invoked.

  • --sys-create-if-not-valid

    Property Value
    Command-Line Format --sys-create-if-not-valid
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Create any NDB system statistics tables or events that do not already exist, after dropping any that are invalid.

  • --sys-check

    Property Value
    Command-Line Format --sys-check
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Verify that all required system statistics tables and events exist in the NDB kernel.

  • --sys-skip-tables

    Property Value
    Command-Line Format --sys-skip-tables
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Do not apply any --sys-* options to any statistics tables.

  • --sys-skip-events

    Property Value
    Command-Line Format --sys-skip-events
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Do not apply any --sys-* options to any events.

  • --verbose

    Property Value
    Command-Line Format --verbose
    Type Boolean
    Default Value false
    Minimum Value
    Maximum Value

    Turn on verbose output.

  • --loops=#

    Property Value
    Command-Line Format --loops=#
    Type Numeric
    Default Value 0
    Minimum Value 0
    Maximum Value MAX_INT

    Repeat commands this number of times (for use in testing).

21.4.16 ndb_move_data — NDB Data Copy Utility

ndb_move_data copies data from one NDB table to another.

Usage

The program is invoked with the names of the source and target tables; either or both of these may be qualified optionally with the database name. Both tables must use the NDB storage engine.

ndb_move_data options source target

The following table includes options that are specific to ndb_move_data. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_move_data), see Section 21.4.32, "Options Common to NDB Cluster Programs".

Table 21.329 Command-line options for the ndb_move_data program

Format Description Added, Deprecated, or Removed

--abort-on-error
Dump core on permanent error (debug option)
All MySQL 5.7 based releases

--character-sets-dir=name
Directory containing character sets
All MySQL 5.7 based releases

--database=dbname, -d
Name of the database in which the table is found
All MySQL 5.7 based releases

--drop-source
Drop source table after all rows have been moved
All MySQL 5.7 based releases

--error-insert
Insert random temporary errors (testing option)
All MySQL 5.7 based releases

--exclude-missing-columns
Ignore extra columns in source or target table
All MySQL 5.7 based releases

--lossy-conversions, -l
Allow attribute data to be truncated when converted to a smaller type
All MySQL 5.7 based releases

--promote-attributes, -A
Allow attribute data to be converted to a larger type
All MySQL 5.7 based releases

--staging-tries=x[,y[,z]]
Specify tries on temporary errors. Format is x[,y[,z]] where x=max tries (0=no limit), y=min delay (ms), z=max delay (ms)
All MySQL 5.7 based releases

--verbose
Enable verbose messages
All MySQL 5.7 based releases


  • --abort-on-error

    Property Value
    Command-Line Format --abort-on-error
    Type Boolean
    Default Value FALSE

    Dump core on permanent error (debug option).

  • --character-sets-dir=name

    Property Value
    Command-Line Format --character-sets-dir=name
    Type String
    Default Value [none]

    Directory containing character sets.

  • --database=dbname, -d

    Property Value
    Command-Line Format --database=dbname
    Type String
    Default Value TEST_DB

    Name of the database in which the table is found.

  • --drop-source

    Property Value
    Command-Line Format --drop-source
    Type Boolean
    Default Value FALSE

    Drop source table after all rows have been moved.

  • --error-insert

    Property Value
    Command-Line Format --error-insert
    Type Boolean
    Default Value FALSE

    Insert random temporary errors (testing option).

  • --exclude-missing-columns

    Property Value
    Command-Line Format --exclude-missing-columns
    Type Boolean
    Default Value FALSE

    Ignore extra columns in source or target table.

  • --lossy-conversions, -l

    Property Value
    Command-Line Format --lossy-conversions
    Type Boolean
    Default Value FALSE

    Allow attribute data to be truncated when converted to a smaller type.

  • --promote-attributes, -A

    Property Value
    Command-Line Format --promote-attributes
    Type Boolean
    Default Value FALSE

    Allow attribute data to be converted to a larger type.

  • --staging-tries=x[,y[,z]]

    Property Value
    Command-Line Format --staging-tries=x[,y[,z]]
    Type String
    Default Value 0,1000,60000

    Specify tries on temporary errors. Format is x[,y[,z]] where x=max tries (0=no limit), y=min delay (ms), z=max delay (ms).

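    The x[,y[,z]] syntax is easy to parse by splitting on commas and filling in the documented defaults (0 tries limit, 1000 ms minimum delay, 60000 ms maximum delay); a sketch in Python:

    ```python
    def parse_staging_tries(value="0,1000,60000"):
        """Parse an x[,y[,z]] --staging-tries value into a tuple
        (max_tries, min_delay_ms, max_delay_ms); omitted fields fall
        back to the documented defaults 0 (no limit), 1000, and 60000."""
        defaults = [0, 1000, 60000]
        parts = [int(p) for p in value.split(",")] if value else []
        if len(parts) > 3:
            raise ValueError("expected at most three fields: x[,y[,z]]")
        return tuple(parts + defaults[len(parts):])
    ```

    For example, "5,200" yields a limit of 5 tries with a 200 ms minimum delay and the default 60000 ms maximum delay.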
  • --verbose

    Property Value
    Command-Line Format --verbose
    Type Boolean
    Default Value FALSE

    Enable verbose messages.

21.4.17 ndb_perror — Obtain NDB Error Message Information

ndb_perror shows information about an NDB error, given its error code. This includes the error message, the type of error, and whether the error is permanent or temporary. Added to the MySQL NDB Cluster distribution in NDB 7.6.4, it is intended as a drop-in replacement for perror --ndb.

Usage

ndb_perror [options] error_code

ndb_perror does not need to access a running NDB Cluster, or any nodes (including SQL nodes). To view information about a given NDB error, invoke the program, using the error code as an argument, like this:

shell> ndb_perror 323
NDB error code 323: Invalid nodegroup id, nodegroup already existing: Permanent error: Application error

To display only the error message, invoke ndb_perror with the --silent option (short form -s), as shown here:

shell> ndb_perror -s 323
Invalid nodegroup id, nodegroup already existing: Permanent error: Application error

Like perror, ndb_perror accepts multiple error codes:

shell> ndb_perror 321 1001
NDB error code 321: Invalid nodegroup id: Permanent error: Application error
NDB error code 1001: Illegal connect string

Additional program options for ndb_perror are described later in this section.

ndb_perror replaces perror --ndb, which is deprecated as of NDB 7.6.4 and subject to removal in a future release of MySQL NDB Cluster. To make substitution easier in scripts and other applications that might depend on perror for obtaining NDB error information, ndb_perror supports its own dummy --ndb option, which does nothing.

The following table includes all options that are specific to the NDB Cluster program ndb_perror. Additional descriptions follow the table.

Table 21.330 Command-line options for the ndb_perror program

Format Description Added, Deprecated, or Removed

--help, -?
Display help text
ADDED: NDB 7.6.4

--ndb
For compatibility with applications depending on old versions of perror; does nothing
ADDED: NDB 7.6.4

--silent, -s
Show error message only
ADDED: NDB 7.6.4

--version, -V
Print program version information and exit
ADDED: NDB 7.6.4

--verbose, -v
Verbose output; disable with --silent
ADDED: NDB 7.6.4


Additional Options

  • --help, -?

    Property Value
    Command-Line Format --help
    Introduced 5.7.19-ndb-7.6.4
    Type Boolean
    Default Value TRUE

    Display program help text and exit.

  • --ndb

    Property Value
    Command-Line Format --ndb
    Introduced 5.7.19-ndb-7.6.4
    Type Boolean
    Default Value TRUE

    For compatibility with applications that depend on old versions of perror and use that program's --ndb option. When used with ndb_perror, this option does nothing and is ignored.

  • --silent, -s

    Property Value
    Command-Line Format --silent
    Introduced 5.7.19-ndb-7.6.4
    Type Boolean
    Default Value TRUE

    Show error message only.

  • --version, -V

    Property Value
    Command-Line Format --version
    Introduced 5.7.19-ndb-7.6.4
    Type Boolean
    Default Value TRUE

    Print program version information and exit.

  • --verbose, -v

    Property Value
    Command-Line Format --verbose
    Introduced 5.7.19-ndb-7.6.4
    Type Boolean
    Default Value TRUE

    Verbose output; disable with --silent.

21.4.18 ndb_print_backup_file — Print NDB Backup File Contents

ndb_print_backup_file obtains diagnostic information from a cluster backup file.

Usage

ndb_print_backup_file file_name

file_name is the name of a cluster backup file. This can be any of the files (.Data, .ctl, or .log file) found in a cluster backup directory. These files are found in the data node's backup directory under the subdirectory BACKUP-#, where # is the sequence number for the backup. For more information about cluster backup files and their contents, see Section 21.5.3.1, “NDB Cluster Backup Concepts”.

Like ndb_print_schema_file and ndb_print_sys_file (and unlike most of the other NDB utilities that are intended to be run on a management server host or to connect to a management server) ndb_print_backup_file must be run on a cluster data node, since it accesses the data node file system directly. Because it does not make use of the management server, this utility can be used when the management server is not running, and even when the cluster has been completely shut down.

Additional Options

None.

21.4.19 ndb_print_file — Print NDB Disk Data File Contents

ndb_print_file obtains information from an NDB Cluster Disk Data file.

Usage

ndb_print_file [-v] [-q] file_name+

file_name is the name of an NDB Cluster Disk Data file. Multiple filenames are accepted, separated by spaces.

Like ndb_print_schema_file and ndb_print_sys_file (and unlike most of the other NDB utilities that are intended to be run on a management server host or to connect to a management server) ndb_print_file must be run on an NDB Cluster data node, since it accesses the data node file system directly. Because it does not make use of the management server, this utility can be used when the management server is not running, and even when the cluster has been completely shut down.

Additional Options

ndb_print_file supports the following options:

  • -v: Make output verbose.

  • -q: Suppress output (quiet mode).

  • --help, -h, -?: Print help message.

For more information, see Section 21.5.13, “NDB Cluster Disk Data Tables”.

21.4.20 ndb_print_frag_file — Print NDB Fragment List File Contents

ndb_print_frag_file obtains information from a cluster fragment list file. It is intended for use in helping to diagnose issues with data node restarts.

Usage

ndb_print_frag_file file_name

file_name is the name of a cluster fragment list file. Such files match the pattern SX.FragList, where X is a digit in the range 2-9 inclusive, and are found in the data node file system of the data node having the node ID nodeid, in directories named ndb_nodeid_fs/DN/DBDIH/, where N is 1 or 2. Each fragment file contains records of the fragments belonging to each NDB table. For more information about cluster fragment files, see NDB Cluster Data Node File System Directory Files.

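The SX.FragList naming rule can be checked mechanically, for example with a regular expression:

```python
import re

# "S" followed by one digit in the range 2-9, then ".FragList"
FRAGLIST = re.compile(r"S[2-9]\.FragList\Z")

def is_fraglist(name):
    """Return True if name is a valid fragment list file name."""
    return FRAGLIST.match(name) is not None
```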
Like ndb_print_backup_file, ndb_print_sys_file, and ndb_print_schema_file (and unlike most of the other NDB utilities that are intended to be run on a management server host or to connect to a management server), ndb_print_frag_file must be run on a cluster data node, since it accesses the data node file system directly. Because it does not make use of the management server, this utility can be used when the management server is not running, and even when the cluster has been completely shut down.

Additional Options

None.

Sample Output

shell> ndb_print_frag_file /usr/local/mysqld/data/ndb_3_fs/D1/DBDIH/S2.FragList
Filename: /usr/local/mysqld/data/ndb_3_fs/D1/DBDIH/S2.FragList with size 8192
noOfPages = 1 noOfWords = 182
Table Data
----------
Num Frags: 2 NoOfReplicas: 2 hashpointer: 4294967040
kvalue: 6 mask: 0x00000000 method: HashMap
Storage is on Logged and checkpointed, survives SR
------ Fragment with FragId: 0 --------
Preferred Primary: 2 numStoredReplicas: 2 numOldStoredReplicas: 0 distKey: 0 LogPartId: 0
-------Stored Replica----------
Replica node is: 2 initialGci: 2 numCrashedReplicas = 0 nextLcpNo = 1
LcpNo[0]: maxGciCompleted: 1 maxGciStarted: 2 lcpId: 1 lcpStatus: valid
LcpNo[1]: maxGciCompleted: 0 maxGciStarted: 0 lcpId: 0 lcpStatus: invalid
-------Stored Replica----------
Replica node is: 3 initialGci: 2 numCrashedReplicas = 0 nextLcpNo = 1
LcpNo[0]: maxGciCompleted: 1 maxGciStarted: 2 lcpId: 1 lcpStatus: valid
LcpNo[1]: maxGciCompleted: 0 maxGciStarted: 0 lcpId: 0 lcpStatus: invalid
------ Fragment with FragId: 1 --------
Preferred Primary: 3 numStoredReplicas: 2 numOldStoredReplicas: 0 distKey: 0 LogPartId: 1
-------Stored Replica----------
Replica node is: 3 initialGci: 2 numCrashedReplicas = 0 nextLcpNo = 1
LcpNo[0]: maxGciCompleted: 1 maxGciStarted: 2 lcpId: 1 lcpStatus: valid
LcpNo[1]: maxGciCompleted: 0 maxGciStarted: 0 lcpId: 0 lcpStatus: invalid
-------Stored Replica----------
Replica node is: 2 initialGci: 2 numCrashedReplicas = 0 nextLcpNo = 1
LcpNo[0]: maxGciCompleted: 1 maxGciStarted: 2 lcpId: 1 lcpStatus: valid
LcpNo[1]: maxGciCompleted: 0 maxGciStarted: 0 lcpId: 0 lcpStatus: invalid

21.4.21 ndb_print_schema_file — Print NDB Schema File Contents

ndb_print_schema_file obtains diagnostic information from a cluster schema file.

Usage

ndb_print_schema_file file_name

file_name is the name of a cluster schema file. For more information about cluster schema files, see NDB Cluster Data Node File System Directory Files.

Like ndb_print_backup_file and ndb_print_sys_file (and unlike most of the other NDB utilities that are intended to be run on a management server host or to connect to a management server) ndb_print_schema_file must be run on a cluster data node, since it accesses the data node file system directly. Because it does not make use of the management server, this utility can be used when the management server is not running, and even when the cluster has been completely shut down.

Additional Options

None.

21.4.22 ndb_print_sys_file — Print NDB System File Contents

ndb_print_sys_file obtains diagnostic information from an NDB Cluster system file.

Usage

ndb_print_sys_file file_name

file_name is the name of a cluster system file (sysfile). Cluster system files are located in a data node's data directory (DataDir); the path under this directory to system files matches the pattern ndb_#_fs/D#/DBDIH/P#.sysfile. In each case, the # represents a number (not necessarily the same number). For more information, see NDB Cluster Data Node File System Directory Files.

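The path pattern just described can be expressed as a shell-style glob; the following Python snippet is only an illustrative sketch (fnmatch and the sample paths are not part of the NDB tools):

```python
import fnmatch

# Shell-style glob for the sysfile path under DataDir; each * stands for
# a number (not necessarily the same one).
SYSFILE_GLOB = "ndb_*_fs/D*/DBDIH/P*.sysfile"

for path in ("ndb_2_fs/D1/DBDIH/P0.sysfile",
             "ndb_2_fs/D1/DBDIH/S0.FragLog"):
    # Prints True for a sysfile path, False otherwise
    print(path, "->", fnmatch.fnmatch(path, SYSFILE_GLOB))
```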
Like ndb_print_backup_file and ndb_print_schema_file (and unlike most of the other NDB utilities that are intended to be run on a management server host or to connect to a management server), ndb_print_sys_file must be run on a cluster data node, since it accesses the data node file system directly. Because it does not make use of the management server, this utility can be used when the management server is not running, and even when the cluster has been completely shut down.

Additional Options

None.

21.4.23 ndb_redo_log_reader — Check and Print Content of Cluster Redo Log

Reads a redo log file, checking it for errors, printing its contents in a human-readable format, or both. ndb_redo_log_reader is intended for use primarily by NDB Cluster developers and Support personnel in debugging and diagnosing problems.

This utility remains under development, and its syntax and behavior are subject to change in future NDB Cluster releases.

Note

Prior to NDB 7.2, this utility was named ndbd_redo_log_reader.

The C++ source files for ndb_redo_log_reader can be found in the directory /storage/ndb/src/kernel/blocks/dblqh/redoLogReader.

The following table includes options that are specific to the NDB Cluster program ndb_redo_log_reader. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_redo_log_reader), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.331 Command-line options for the ndb_redo_log_reader program

Format and Description (all options are supported in all MySQL 5.7 based releases)

-dump
    Print dump info
-filedescriptors
    Print file descriptors only
--help
    Print usage information
-lap
    Provide lap info, with max GCI started and completed
-mbyte #
    Starting megabyte
-mbyteheaders
    Show only the first page header of every megabyte in the file
-nocheck
    Do not check records for errors
-noprint
    Do not print records
-page #
    Start with this page
-pageheaders
    Show page headers only
-pageindex #
    Start with this page index
-twiddle
    Bit-shifted dump


Usage

ndb_redo_log_reader file_name [options]

file_name is the name of a cluster redo log file. Redo log files are located in the numbered directories under the data node's data directory (DataDir); the path under this directory to the redo log files matches the pattern ndb_nodeid_fs/D#/DBLQH/S#.FragLog. nodeid is the data node's node ID. The two instances of # each represent a number (not necessarily the same number); the number following D is in the range 8-39 inclusive; the range of the number following S varies according to the value of the NoOfFragmentLogFiles configuration parameter, whose default value is 16; thus, the default range of the number in the file name is 0-15 inclusive. For more information, see NDB Cluster Data Node File System Directory Files.

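The naming rule above can be checked mechanically; this hypothetical Python helper (not part of the NDB distribution) validates a path against the pattern, assuming the default NoOfFragmentLogFiles of 16:

```python
import re

# ndb_<nodeid>_fs/D<8-39>/DBLQH/S<0-15>.FragLog, where the S range is
# derived from NoOfFragmentLogFiles (default 16).
FRAGLOG_RE = re.compile(r"ndb_\d+_fs/D(\d+)/DBLQH/S(\d+)\.FragLog$")

def is_fraglog_path(path, no_of_fragment_log_files=16):
    """Return True if path names a redo log file per the pattern above."""
    m = FRAGLOG_RE.search(path)
    if m is None:
        return False
    d, s = int(m.group(1)), int(m.group(2))
    return 8 <= d <= 39 and 0 <= s < no_of_fragment_log_files

print(is_fraglog_path("ndb_3_fs/D8/DBLQH/S0.FragLog"))   # True
print(is_fraglog_path("ndb_3_fs/D7/DBLQH/S0.FragLog"))   # False: D out of range
```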
The name of the file to be read may be followed by one or more of the options listed here:

  • -dump

    Property Value
    Command-Line Format -dump
    Type Boolean
    Default Value FALSE

    Print dump info.

  • Property Value
    Command-Line Format -filedescriptors
    Type Boolean
    Default Value FALSE

    -filedescriptors: Print file descriptors only.

  • Property Value
    Command-Line Format --help

    --help: Print usage information.

  • -lap

    Property Value
    Command-Line Format -lap
    Type Boolean
    Default Value FALSE

    Provide lap info, with max GCI started and completed.

  • Property Value
    Command-Line Format -mbyte #
    Type Numeric
    Default Value 0
    Minimum Value 0
    Maximum Value 15

    -mbyte #: Starting megabyte.

    # is an integer in the range 0 to 15, inclusive.

  • Property Value
    Command-Line Format -mbyteheaders
    Type Boolean
    Default Value FALSE

    -mbyteheaders: Show only the first page header of every megabyte in the file.

  • Property Value
    Command-Line Format -noprint
    Type Boolean
    Default Value FALSE

    -noprint: Do not print the contents of the log file.

  • Property Value
    Command-Line Format -nocheck
    Type Boolean
    Default Value FALSE

    -nocheck: Do not check the log file for errors.

  • Property Value
    Command-Line Format -page #
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 31

    -page #: Start at this page.

    # is an integer in the range 0 to 31, inclusive.

  • Property Value
    Command-Line Format -pageheaders
    Type Boolean
    Default Value FALSE

    -pageheaders: Show page headers only.

  • Property Value
    Command-Line Format -pageindex #
    Type Integer
    Default Value 12
    Minimum Value 12
    Maximum Value 8191

    -pageindex #: Start at this page index.

    # is an integer between 12 and 8191, inclusive.

  • -twiddle

    Property Value
    Command-Line Format -twiddle
    Type Boolean
    Default Value FALSE

    Bit-shifted dump.

Like ndb_print_backup_file and ndb_print_schema_file (and unlike most of the NDB utilities that are intended to be run on a management server host or to connect to a management server) ndb_redo_log_reader must be run on a cluster data node, since it accesses the data node file system directly. Because it does not make use of the management server, this utility can be used when the management server is not running, and even when the cluster has been completely shut down.

21.4.24 ndb_restore — Restore an NDB Cluster Backup

The NDB Cluster restoration program is implemented as a separate command-line utility ndb_restore, which can normally be found in the MySQL bin directory. This program reads the files created as a result of the backup and inserts the stored information into the database.

Note

Beginning with NDB 7.5.15 and 7.6.11, this program no longer prints NDBT_ProgramExit: ... when it finishes its run. Applications depending on this behavior should be modified accordingly when upgrading from earlier releases.

ndb_restore must be executed once for each of the backup files that were created by the START BACKUP command used to create the backup (see Section 21.5.3.2, “Using The NDB Cluster Management Client to Create a Backup”). This is equal to the number of data nodes in the cluster at the time that the backup was created.

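Because one invocation is needed per data node, restores are often scripted. The following Python sketch merely builds the command lines; the node IDs, backup ID, and paths are illustrative, and passing -m only on the first node (so that metadata is restored once) is an assumption of this example:

```python
def restore_commands(node_ids, backup_id, backup_dir, connectstring):
    """Build one ndb_restore command line per data node in the backup."""
    cmds = []
    for i, node_id in enumerate(node_ids):
        cmd = ["ndb_restore", "-c", connectstring,
               "-n", str(node_id), "-b", str(backup_id)]
        if i == 0:
            cmd.append("-m")  # restore metadata from the first node only
        cmd += ["-r", "--backup_path={0}/BACKUP-{1}".format(backup_dir, backup_id)]
        cmds.append(" ".join(cmd))
    return cmds

for c in restore_commands([2, 3], 1, "/var/lib/mysql-cluster/BACKUP",
                          "mgmhost:1186"):
    print(c)
```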
Note

Before using ndb_restore, it is recommended that the cluster be running in single user mode, unless you are restoring multiple data nodes in parallel. See Section 21.5.8, “NDB Cluster Single User Mode”, for more information.

The following table includes options that are specific to the NDB Cluster native backup restoration program ndb_restore. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_restore), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.332 Command-line options for the ndb_restore program

Format and Description (all options are supported in all MySQL 5.7 based releases unless otherwise noted)

--append
    Append data to a tab-delimited file
--backup_path=dir_name
    Path to backup files directory
--backupid=#, -b
    Restore from the backup with the given ID
--connect, -c
    Alias for --ndb-connectstring
--disable-indexes
    Causes indexes from a backup to be ignored; may decrease time needed to restore data
--dont-ignore-systab-0, -f
    Do not ignore system table during restore. Experimental only; not for production use
--exclude-databases=db-list
    List of one or more databases to exclude (includes those not named)
--exclude-intermediate-sql-tables[=TRUE|FALSE]
    If TRUE (the default), do not restore any intermediate tables (having names prefixed with '#sql-') that were left over from copying ALTER TABLE operations
--exclude-missing-columns
    Causes columns from the backup version of a table that are missing from the version of the table in the database to be ignored
--exclude-missing-tables
    Causes tables from the backup that are missing from the database to be ignored
--exclude-tables=table-list
    List of one or more tables to exclude (includes those in the same database that are not named); each table reference must include the database name
--fields-enclosed-by=char
    Fields are enclosed with the indicated character
--fields-optionally-enclosed-by
    Fields are optionally enclosed with the indicated character
--fields-terminated-by=char
    Fields are terminated by the indicated character
--hex
    Print binary types in hexadecimal format
--include-databases=db-list
    List of one or more databases to restore (excludes those not named)
--include-tables=table-list
    List of one or more tables to restore (excludes those in same database that are not named); each table reference must include the database name
--lines-terminated-by=char
    Lines are terminated by the indicated character
--lossy-conversions, -L
    Allow lossy conversions of column values (type demotions or changes in sign) when restoring data from backup
--no-binlog
    If a mysqld is connected and using binary logging, do not log the restored data
--no-restore-disk-objects, -d
    Do not restore objects relating to Disk Data
--no-upgrade, -u
    Do not upgrade array type for varsize attributes which do not already resize VAR data, and do not change column attributes
--ndb-nodegroup-map=map, -z
    Nodegroup map for NDBCLUSTER storage engine. Syntax: list of (source_nodegroup, destination_nodegroup)
--nodeid=#, -n
    ID of node where backup was taken
--parallelism=#, -p
    Number of parallel transactions to use while restoring data
--preserve-trailing-spaces, -P
    Allow preservation of trailing spaces (including padding) when promoting fixed-width string types to variable-width types
--print
    Print metadata, data and log to stdout (equivalent to --print-meta --print-data --print-log)
--print-data
    Print data to stdout
--print-log
    Print log to stdout
--print-meta
    Print metadata to stdout
--print-sql-log
    Write SQL log to stdout; default is FALSE (ADDED: NDB 7.5.4)
--progress-frequency=#
    Print status of restoration each given number of seconds
--promote-attributes, -A
    Allow attributes to be promoted when restoring data from backup
--rebuild-indexes
    Causes multithreaded rebuilding of ordered indexes found in the backup; the number of threads used is determined by setting the BuildIndexThreads parameter
--restore-data, -r
    Restore table data and logs into NDB Cluster using the NDB API
--restore-epoch, -e
    Restore epoch info into the status table; convenient on a MySQL Cluster replication slave for starting replication. The row in mysql.ndb_apply_status with id 0 will be updated/inserted
--restore-meta, -m
    Restore metadata to NDB Cluster using the NDB API
--restore-privilege-tables
    Restore MySQL privilege tables that were previously moved to NDB
--rewrite-database=olddb,newdb
    Restores to a database with a different name than the original
--skip-broken-objects
    Causes missing blob tables in the backup file to be ignored
--skip-table-check, -s
    Skip table structure check during restoring of data
--skip-unknown-objects
    Causes schema objects not recognized by ndb_restore to be ignored when restoring a backup made from a newer MySQL Cluster version to an older version
--tab=dir_name, -T dir_name
    Creates a tab-separated .txt file for each table in the given path
--verbose=#
    Level of verbosity in output


Typical options for this utility are shown here:

ndb_restore [-c connection_string] -n node_id -b backup_id \
      [-m] -r --backup_path=/path/to/backup/files

Normally, when restoring from an NDB Cluster backup, ndb_restore requires at a minimum the --nodeid (short form: -n), --backupid (short form: -b), and --backup_path options. In addition, when ndb_restore is used to restore any tables containing unique indexes, you must include --disable-indexes or --rebuild-indexes. (Bug #57782, Bug #11764893)

The -c option is used to specify a connection string which tells ndb_restore where to locate the cluster management server (see Section 21.3.3.3, “NDB Cluster Connection Strings”). If this option is not used, then ndb_restore attempts to connect to a management server on localhost:1186. This utility acts as a cluster API node, and so requires a free connection slot to connect to the cluster management server. This means that there must be at least one [api] or [mysqld] section that can be used by it in the cluster config.ini file. It is a good idea to keep at least one empty [api] or [mysqld] section in config.ini that is not being used for a MySQL server or other application for this reason (see Section 21.3.3.7, “Defining SQL and Other API Nodes in an NDB Cluster”).

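One way to follow this advice is to keep a spare, otherwise-empty [api] section in config.ini; the fragment below is illustrative only (the node ID and host name are examples, not taken from this manual):

```ini
[mysqld]
NodeId=4
HostName=sqlhost1

# Spare slot, left free for utilities such as ndb_restore to use
[api]
```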
You can verify that ndb_restore is connected to the cluster by using the SHOW command in the ndb_mgm management client. You can also accomplish this from a system shell, as shown here:

shell> ndb_mgm -e "SHOW"

More detailed information about all options used by ndb_restore can be found in the following list:

  • --append

    Property Value
    Command-Line Format --append

    When used with the --tab and --print-data options, this causes the data to be appended to any existing files having the same names.

  • --backup_path=dir_name

    Property Value
    Command-Line Format --backup-path=dir_name
    Type Directory name
    Default Value ./

    The path to the backup directory is required; this is supplied to ndb_restore using the --backup_path option, and must include the subdirectory corresponding to the ID of the backup to be restored. For example, if the data node's DataDir is /var/lib/mysql-cluster, then the backup directory is /var/lib/mysql-cluster/BACKUP, and the backup files for the backup with the ID 3 can be found in /var/lib/mysql-cluster/BACKUP/BACKUP-3. The path may be absolute or relative to the directory in which the ndb_restore executable is located, and may be optionally prefixed with backup_path=.

    It is possible to restore a backup to a database with a different configuration than it was created from. For example, suppose that a backup with backup ID 12, created in a cluster with two storage nodes having the node IDs 2 and 3, is to be restored to a cluster with four nodes. Then ndb_restore must be run twice—once for each storage node in the cluster where the backup was taken. However, ndb_restore cannot always restore backups made from a cluster running one version of MySQL to a cluster running a different MySQL version. See Section 21.2.9, “Upgrading and Downgrading NDB Cluster”, for more information.

    Important

    It is not possible to restore a backup made from a newer version of NDB Cluster using an older version of ndb_restore. You can restore a backup made from a newer version of MySQL to an older cluster, but you must use a copy of ndb_restore from the newer NDB Cluster version to do so.

    For example, to restore a cluster backup taken from a cluster running NDB Cluster 7.5.16 to a cluster running NDB Cluster 7.4.26, you must use the ndb_restore that comes with the NDB Cluster 7.5.16 distribution.

    For more rapid restoration, the data may be restored in parallel, provided that there is a sufficient number of cluster connections available. That is, when restoring to multiple nodes in parallel, you must have an [api] or [mysqld] section in the cluster config.ini file available for each concurrent ndb_restore process. However, the data files must always be applied before the logs.

  • --backupid=#, -b

    Property Value
    Command-Line Format --backupid=#
    Type Numeric
    Default Value none

    This option is used to specify the ID or sequence number of the backup, and is the same number shown by the management client in the Backup backup_id completed message displayed upon completion of a backup. (See Section 21.5.3.2, “Using The NDB Cluster Management Client to Create a Backup”.)

    Important

    When restoring cluster backups, you must be sure to restore all data nodes from backups having the same backup ID. Using files from different backups will at best result in restoring the cluster to an inconsistent state, and may fail altogether.

    In NDB 7.5.13 and later, and in NDB 7.6.9 and later, this option is required.

  • --connect, -c

    Property Value
    Command-Line Format --connect
    Type String
    Default Value localhost:1186

    Alias for --ndb-connectstring.

  • --disable-indexes

    Property Value
    Command-Line Format --disable-indexes

    Disable restoration of indexes during restoration of the data from a native NDB backup. Afterwards, you can restore indexes for all tables at once with multithreaded building of indexes using --rebuild-indexes, which should be faster than rebuilding indexes concurrently for very large tables.

  • --dont-ignore-systab-0, -f

    Property Value
    Command-Line Format --dont-ignore-systab-0

    Normally, when restoring table data and metadata, ndb_restore ignores the copy of the NDB system table that is present in the backup. --dont-ignore-systab-0 causes the system table to be restored. This option is intended for experimental and development use only, and is not recommended in a production environment.

  • --exclude-databases=db-list

    Property Value
    Command-Line Format --exclude-databases=db-list
    Type String
    Default Value

    Comma-delimited list of one or more databases which should not be restored.

    This option is often used in combination with --exclude-tables; see that option's description for further information and examples.

  • --exclude-intermediate-sql-tables[=TRUE|FALSE]

    Property Value
    Command-Line Format --exclude-intermediate-sql-tables[=TRUE|FALSE]
    Type Boolean
    Default Value TRUE

    When performing copying ALTER TABLE operations, mysqld creates intermediate tables (whose names are prefixed with #sql-). When TRUE, the --exclude-intermediate-sql-tables option keeps ndb_restore from restoring such tables that may have been left over from these operations. This option is TRUE by default.

  • --exclude-missing-columns

    Property Value
    Command-Line Format --exclude-missing-columns

    It is possible to restore only selected table columns using this option, which causes ndb_restore to ignore any columns missing from tables being restored as compared to the versions of those tables found in the backup. This option applies to all tables being restored. If you wish to apply this option only to selected tables or databases, you can use it in combination with one or more of the --include-* or --exclude-* options described elsewhere in this section to do so, then restore data to the remaining tables using a complementary set of these options.

  • --exclude-missing-tables

    Property Value
    Command-Line Format --exclude-missing-tables

    It is possible to restore only selected tables using this option, which causes ndb_restore to ignore any tables from the backup that are not found in the target database.

  • --exclude-tables=table-list

    Property Value
    Command-Line Format --exclude-tables=table-list
    Type String
    Default Value

    List of one or more tables to exclude; each table reference must include the database name. Often used together with --exclude-databases.

    When --exclude-databases or --exclude-tables is used, only those databases or tables named by the option are excluded; all other databases and tables are restored by ndb_restore.

    This table shows several invocations of ndb_restore using --exclude-* options (other options possibly required have been omitted for clarity), and the effects these options have on restoring from an NDB Cluster backup:

    Table 21.333 Several invocations of ndb_restore using --exclude-* options, and the effects these options have on restoring from an NDB Cluster backup.

    Option: --exclude-databases=db1
    Result: All tables in all databases except db1 are restored; no tables in db1 are restored

    Option: --exclude-databases=db1,db2 (or --exclude-databases=db1 --exclude-databases=db2)
    Result: All tables in all databases except db1 and db2 are restored; no tables in db1 or db2 are restored

    Option: --exclude-tables=db1.t1
    Result: Table db1.t1 is not restored; all other tables in db1 are restored; all tables in all other databases are restored

    Option: --exclude-tables=db1.t2,db2.t1 (or --exclude-tables=db1.t2 --exclude-tables=db2.t1)
    Result: All tables in db1 except t2 and all tables in db2 except t1 are restored; all tables in all other databases are restored

    You can use these two options together. For example, the following causes all tables in all databases except for databases db1 and db2, and tables t1 and t2 in database db3, to be restored:

    shell> ndb_restore [...] --exclude-databases=db1,db2 --exclude-tables=db3.t1,db3.t2
    

    (Again, we have omitted other possibly necessary options in the interest of clarity and brevity from the example just shown.)

    You can use --include-* and --exclude-* options together, subject to the following rules:

    • The actions of all --include-* and --exclude-* options are cumulative.

    • All --include-* and --exclude-* options are evaluated in the order passed to ndb_restore, from right to left.

    • In the event of conflicting options, the first (rightmost) option takes precedence. In other words, the first option (going from right to left) that matches against a given database or table wins.

    For example, the following set of options causes ndb_restore to restore all tables from database db1 except db1.t1, while restoring no other tables from any other databases:

    --include-databases=db1 --exclude-tables=db1.t1
    

    However, reversing the order of the options just given simply causes all tables from database db1 to be restored (including db1.t1, but no tables from any other database), because the --include-databases option, being farthest to the right, is the first match against database db1 and thus takes precedence over any other option that matches db1 or any tables in db1:

    --exclude-tables=db1.t1 --include-databases=db1
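
    The right-to-left evaluation described above can be sketched in Python; this is a hypothetical model of the rules, not code from ndb_restore:

```python
def is_restored(db, table, options):
    """Model ndb_restore's --include-*/--exclude-* precedence:
    options are scanned from right to left and the first match wins."""
    for kind, value in reversed(options):
        names = value.split(",")
        if kind == "include-databases" and db in names:
            return True
        if kind == "exclude-databases" and db in names:
            return False
        if kind == "include-tables" and db + "." + table in names:
            return True
        if kind == "exclude-tables" and db + "." + table in names:
            return False
    # No option matched: restore only if no include-* option was given.
    return not any(kind.startswith("include") for kind, _ in options)

opts = [("include-databases", "db1"), ("exclude-tables", "db1.t1")]
print(is_restored("db1", "t2", opts))  # True
print(is_restored("db1", "t1", opts))  # False
print(is_restored("db2", "t1", opts))  # False
```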
    
  • --fields-enclosed-by=char

    Property Value
    Command-Line Format --fields-enclosed-by=char
    Type String
    Default Value

    Each column value is enclosed by the string passed to this option (regardless of data type; see the description of --fields-optionally-enclosed-by).

  • --fields-optionally-enclosed-by

    Property Value
    Command-Line Format --fields-optionally-enclosed-by
    Type String
    Default Value

    The string passed to this option is used to enclose column values containing character data (such as CHAR, VARCHAR, BINARY, TEXT, or ENUM).

  • --fields-terminated-by=char

    Property Value
    Command-Line Format --fields-terminated-by=char
    Type String
    Default Value \t (tab)

    The string passed to this option is used to separate column values. The default value is a tab character (\t).

  • --hex

    Property Value
    Command-Line Format --hex

    If this option is used, all binary values are output in hexadecimal format.

  • --include-databases=db-list

    Property Value
    Command-Line Format --include-databases=db-list
    Type String
    Default Value

    Comma-delimited list of one or more databases to restore. Often used together with --include-tables; see the description of that option for further information and examples.

  • --include-tables=table-list

    Property Value
    Command-Line Format --include-tables=table-list
    Type String
    Default Value

    Comma-delimited list of tables to restore; each table reference must include the database name.

    When --include-databases or --include-tables is used, only those databases or tables named by the option are restored; all other databases and tables are excluded by ndb_restore, and are not restored.

    The following table shows several invocations of ndb_restore using --include-* options (other options possibly required have been omitted for clarity), and the effects these have on restoring from an NDB Cluster backup:

    Table 21.334 Several invocations of ndb_restore using --include-* options, and their effects on restoring from an NDB Cluster backup.

    Option Result
    --include-databases=db1 Only tables in database db1 are restored; all tables in all other databases are ignored
    --include-databases=db1,db2 (or --include-databases=db1 --include-databases=db2) Only tables in databases db1 and db2 are restored; all tables in all other databases are ignored
    --include-tables=db1.t1 Only table t1 in database db1 is restored; no other tables in db1 or in any other database are restored
    --include-tables=db1.t2,db2.t1 (or --include-tables=db1.t2 --include-tables=db2.t1) Only the table t2 in database db1 and the table t1 in database db2 are restored; no other tables in db1, db2, or any other database are restored

    You can also use these two options together. For example, the following causes all tables in databases db1 and db2, together with the tables t1 and t2 in database db3, to be restored (and no other databases or tables):

    shell> ndb_restore [...] --include-databases=db1,db2 --include-tables=db3.t1,db3.t2
    

    (Again we have omitted other, possibly required, options in the example just shown.)

    It is also possible to restore only selected databases, or selected tables from a single database, without any --include-* (or --exclude-*) options, using the syntax shown here:

    ndb_restore other_options db_name,[db_name[,...] | tbl_name[,tbl_name][,...]]
    

    In other words, you can specify either of the following to be restored:

    • All tables from one or more databases

    • One or more tables from a single database

  • --lines-terminated-by=char

    Property Value
    Command-Line Format --lines-terminated-by=char
    Type String
    Default Value \n (linebreak)

    Specifies the string used to end each line of output. The default is a linefeed character (\n).

  • --lossy-conversions, -L

    Property Value
    Command-Line Format --lossy-conversions
    Type Boolean
    Default Value FALSE (If option is not used)

    This option is intended to complement the --promote-attributes option. Using --lossy-conversions allows lossy conversions of column values (type demotions or changes in sign) when restoring data from backup. With some exceptions, the rules governing demotion are the same as for MySQL replication; see Section 16.4.1.10.2, “Replication of Columns Having Different Data Types”, for information about specific type conversions currently supported by attribute demotion.

    ndb_restore reports any truncation of data that it performs during lossy conversions once per attribute and column.

  • --no-binlog

    Property Value
    Command-Line Format --no-binlog

    This option prevents any connected SQL nodes from writing data restored by ndb_restore to their binary logs.

  • --no-restore-disk-objects, -d

    Property Value
    Command-Line Format --no-restore-disk-objects
    Type Boolean
    Default Value FALSE

    This option stops ndb_restore from restoring any NDB Cluster Disk Data objects, such as tablespaces and log file groups; see Section 21.5.13, “NDB Cluster Disk Data Tables”, for more information about these.

  • --no-upgrade, -u

    Property Value
    Command-Line Format --no-upgrade

    When using ndb_restore to restore a backup, VARCHAR columns created using the old fixed format are resized and recreated using the variable-width format now employed. This behavior can be overridden by specifying --no-upgrade.

  • --ndb-nodegroup-map=map, -z

    Property Value
    Command-Line Format --ndb-nodegroup-map=map

    This option can be used to restore a backup taken from one node group to a different node group. Its argument is a list of the form source_node_group, target_node_group.

  • --nodeid=#, -n

    Property Value
    Command-Line Format --nodeid=#
    Type Numeric
    Default Value none

    Specify the node ID of the data node on which the backup was taken.

    When restoring to a cluster with different number of data nodes from that where the backup was taken, this information helps identify the correct set or sets of files to be restored to a given node. (In such cases, multiple files usually need to be restored to a single data node.) See Section 21.4.24.1, “Restoring to a different number of data nodes”, for additional information and examples.

    In NDB 7.5.13 and later, and in NDB 7.6.9 and later, this option is required.

  • --parallelism=#, -p

    Property Value
    Command-Line Format --parallelism=#
    Type Numeric
    Default Value 128
    Minimum Value 1
    Maximum Value 1024

    ndb_restore uses single-row transactions to apply many rows concurrently. This parameter determines the number of parallel transactions (concurrent rows) that an instance of ndb_restore tries to use. By default, this is 128; the minimum is 1, and the maximum is 1024.

    The work of performing the inserts is parallelized across the threads in the data nodes involved. This mechanism is employed for restoring bulk data from the .Data file—that is, the fuzzy snapshot of the data; it is not used for building or rebuilding indexes. The change log is applied serially; index drops and builds are DDL operations and handled separately. There is no thread-level parallelism on the client side of the restore.

  • --preserve-trailing-spaces, -P

    Property Value
    Command-Line Format --preserve-trailing-spaces

    Cause trailing spaces to be preserved when promoting a fixed-width character data type to its variable-width equivalent—that is, when promoting a CHAR column value to VARCHAR, or a BINARY column value to VARBINARY. Otherwise, any trailing spaces are dropped from such column values when they are inserted into the new columns.

    Note

    Although you can promote CHAR columns to VARCHAR and BINARY columns to VARBINARY, you cannot promote VARCHAR columns to CHAR or VARBINARY columns to BINARY.

  • --print

    Property Value
    Command-Line Format --print
    Type Boolean
    Default Value FALSE

    Causes ndb_restore to print all data, metadata, and logs to stdout. Equivalent to using the --print-data, --print-meta, and --print-log options together.

    Note

    Use of --print or any of the --print_* options is in effect performing a dry run. Including one or more of these options causes any output to be redirected to stdout; in such cases, ndb_restore makes no attempt to restore data or metadata to an NDB Cluster.

  • --print-data

    Property Value
    Command-Line Format --print-data
    Type Boolean
    Default Value FALSE

    Cause ndb_restore to direct its output to stdout. Often used together with one or more of --tab, --fields-enclosed-by, --fields-optionally-enclosed-by, --fields-terminated-by, --hex, and --append.

    TEXT and BLOB column values are always truncated. Such values are truncated to the first 256 bytes in the output. This cannot currently be overridden when using --print-data.

  • --print-log

    Property Value
    Command-Line Format --print-log
    Type Boolean
    Default Value FALSE

    Cause ndb_restore to output its log to stdout.

  • --print-meta

    Property Value
    Command-Line Format --print-meta
    Type Boolean
    Default Value FALSE

    Print all metadata to stdout.

  • --print-sql-log

    Property Value
    Command-Line Format --print-sql-log
    Introduced 5.7.16-ndb-7.5.4
    Type Boolean
    Default Value FALSE

    Log SQL statements to stdout. This behavior is disabled by default; use this option to enable it. Before attempting to log, the option checks whether all of the tables being restored have explicitly defined primary keys; queries on a table having only the hidden primary key implemented by NDB cannot be converted to valid SQL.

    This option does not work with tables having BLOB columns.

    The --print-sql-log option was added in NDB 7.5.4. (Bug #13511949)

  • --progress-frequency=N

    Property Value
    Command-Line Format --progress-frequency=#
    Type Numeric
    Default Value 0
    Minimum Value 0
    Maximum Value 65535

    Print a status report each N seconds while the backup is in progress. 0 (the default) causes no status reports to be printed. The maximum is 65535.

  • --promote-attributes, -A

    Property Value
    Command-Line Format --promote-attributes

    ndb_restore supports limited attribute promotion in much the same way that it is supported by MySQL replication; that is, data backed up from a column of a given type can generally be restored to a column using a larger, similar type. For example, data from a CHAR(20) column can be restored to a column declared as VARCHAR(20), VARCHAR(30), or CHAR(30); data from a MEDIUMINT column can be restored to a column of type INT or BIGINT. See Section 16.4.1.10.2, “Replication of Columns Having Different Data Types”, for a table of type conversions currently supported by attribute promotion.

    Attribute promotion by ndb_restore must be enabled explicitly, as follows:

    1. Prepare the table to which the backup is to be restored. ndb_restore cannot be used to re-create the table with a different definition from the original; this means that you must either create the table manually, or alter the columns which you wish to promote using ALTER TABLE after restoring the table metadata but before restoring the data.

    2. Invoke ndb_restore with the --promote-attributes option (short form -A) when restoring the table data. Attribute promotion does not occur if this option is not used; instead, the restore operation fails with an error.

    When converting between character data types and TEXT or BLOB, only conversions between character types (CHAR and VARCHAR) and binary types (BINARY and VARBINARY) can be performed at the same time. For example, you cannot promote an INT column to BIGINT while promoting a VARCHAR column to TEXT in the same invocation of ndb_restore.

    Converting between TEXT columns using different character sets is not supported, and is expressly disallowed.

    When performing conversions of character or binary types to TEXT or BLOB with ndb_restore, you may notice that it creates and uses one or more staging tables named table_name$STnode_id. These tables are not needed afterwards, and are normally deleted by ndb_restore following a successful restoration.
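
    The staging-table naming pattern just mentioned can be sketched as follows; this is an illustrative helper, not part of ndb_restore, and the name is normally generated internally:

```shell
# Illustrative sketch of the table_name$STnode_id staging-table naming
# pattern used during character/binary-to-TEXT/BLOB conversions
# (for example, table t1 restored through node ID 3).
staging_name() { printf '%s$ST%s\n' "$1" "$2"; }
staging_name t1 3   # t1$ST3
```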

  • --rebuild-indexes

    Property Value
    Command-Line Format --rebuild-indexes

    Enable multithreaded rebuilding of the ordered indexes while restoring a native NDB backup. The number of threads used for building ordered indexes by ndb_restore with this option is controlled by the BuildIndexThreads data node configuration parameter and the number of LDMs.

    It is necessary to use this option only for the first run of ndb_restore; this causes all ordered indexes to be rebuilt without using --rebuild-indexes again when restoring subsequent nodes. You should use this option prior to inserting new rows into the database; otherwise, it is possible for a row to be inserted that later causes a unique constraint violation when trying to rebuild the indexes.

    Building of ordered indices is parallelized with the number of LDMs by default. Offline index builds performed during node and system restarts can be made faster using the BuildIndexThreads data node configuration parameter; this parameter has no effect on dropping and rebuilding of indexes by ndb_restore, which is performed online.

    Rebuilding of unique indexes uses disk write bandwidth for redo logging and local checkpointing. An insufficient amount of this bandwidth can lead to redo buffer overload or log overload errors. In such cases you can run ndb_restore --rebuild-indexes again; the process resumes at the point where the error occurred. You can also do this when you have encountered temporary errors. You can repeat execution of ndb_restore --rebuild-indexes indefinitely; you may be able to stop such errors by reducing the value of --parallelism. If the problem is insufficient space, you can increase the size of the redo log (FragmentLogFileSize node configuration parameter), or you can increase the speed at which LCPs are performed (MaxDiskWriteSpeed and related parameters), in order to free space more quickly.

  • --restore-data, -r

    Property Value
    Command-Line Format --restore-data
    Type Boolean
    Default Value FALSE

    Output NDB table data and logs.

  • --restore-epoch, -e

    Property Value
    Command-Line Format --restore-epoch

    Add (or restore) epoch information to the cluster replication status table. This is useful for starting replication on an NDB Cluster replication slave. When this option is used, the row in the mysql.ndb_apply_status having 0 in the id column is updated if it already exists; such a row is inserted if it does not already exist. (See Section 21.6.9, “NDB Cluster Backups With NDB Cluster Replication”.)

  • --restore-meta, -m

    Property Value
    Command-Line Format --restore-meta
    Type Boolean
    Default Value FALSE

    This option causes ndb_restore to print NDB table metadata.

    The first time you run the ndb_restore restoration program, you also need to restore the metadata. In other words, you must re-create the database tables—this can be done by running it with the --restore-meta (-m) option. Restoring the metadata need be done only on a single data node; this is sufficient to restore it to the entire cluster.

    In older versions of NDB Cluster, tables whose schemas were restored using this option used the same number of partitions as they did on the original cluster, even if it had a differing number of data nodes from the new cluster. In NDB 7.5.2 and later, when restoring metadata, this is no longer an issue; ndb_restore now uses the default number of partitions for the target cluster, unless the number of local data manager threads is also changed from what it was for data nodes in the original cluster.

    Note

    The cluster should have an empty database when starting to restore a backup. (In other words, you should start the data nodes with --initial prior to performing the restore.)

  • --restore-privilege-tables

    Property Value
    Command-Line Format --restore-privilege-tables
    Type Boolean
    Default Value FALSE (If option is not used)

    ndb_restore does not by default restore distributed MySQL privilege tables. This option causes ndb_restore to restore the privilege tables.

    This works only if the privilege tables were converted to NDB before the backup was taken. For more information, see Section 21.5.16, “Distributed Privileges Using Shared Grant Tables”.

  • --rewrite-database=olddb,newdb

    Property Value
    Command-Line Format --rewrite-database=olddb,newdb
    Type String
    Default Value none

    This option makes it possible to restore to a database having a different name from that used in the backup. For example, if a backup is made of a database named products, you can restore the data it contains to a database named inventory by using this option as shown here (omitting any other options that might be required):

    shell> ndb_restore --rewrite-database=products,inventory
    

    The option can be employed multiple times in a single invocation of ndb_restore. Thus it is possible to restore simultaneously from a database named db1 to a database named db2 and from a database named db3 to one named db4 using --rewrite-database=db1,db2 --rewrite-database=db3,db4. Other ndb_restore options may be used between multiple occurrences of --rewrite-database.

    In the event of conflicts between multiple --rewrite-database options, the last --rewrite-database option used, reading from left to right, is the one that takes effect. For example, if --rewrite-database=db1,db2 --rewrite-database=db1,db3 is used, only --rewrite-database=db1,db3 is honored, and --rewrite-database=db1,db2 is ignored. It is also possible to restore from multiple databases to a single database, so that --rewrite-database=db1,db3 --rewrite-database=db2,db3 restores all tables and data from databases db1 and db2 into database db3.
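
    This resolution rule can be modeled with a small shell function (a hypothetical sketch for illustration, not the actual implementation): scanning the olddb,newdb pairs in command-line order, the last pair whose source database matches is the one applied.

```shell
# Hypothetical model of --rewrite-database conflict resolution: the last
# matching olddb,newdb pair, reading left to right, takes effect.
rewrite() {
  db=$1; out=$1; shift
  for pair in "$@"; do
    if [ "${pair%%,*}" = "$db" ]; then out=${pair#*,}; fi
  done
  echo "$out"
}

rewrite db1 db1,db2 db1,db3   # db3: the second pair overrides the first
rewrite db2 db1,db3 db2,db3   # db3: multiple sources may map to one target
```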

    Important

    When restoring from multiple backup databases into a single target database using --rewrite-database, no check is made for collisions between table or other object names, and the order in which rows are restored is not guaranteed. This means that it is possible in such cases for rows to be overwritten and updates to be lost.

  • --skip-broken-objects

    Property Value
    Command-Line Format --skip-broken-objects

    This option causes ndb_restore to ignore corrupt tables while reading a native NDB backup, and to continue restoring any remaining tables (that are not also corrupted). Currently, the --skip-broken-objects option works only in the case of missing blob parts tables.

  • --skip-table-check, -s

    Property Value
    Command-Line Format --skip-table-check

    It is possible to restore data without restoring table metadata. By default when doing this, ndb_restore fails with an error if a mismatch is found between the table data and the table schema; this option overrides that behavior.

    Some of the restrictions on mismatches in column definitions when restoring data using ndb_restore are relaxed; when one of these types of mismatches is encountered, ndb_restore does not stop with an error as it did previously, but rather accepts the data and inserts it into the target table while issuing a warning to the user that this is being done. This behavior occurs whether or not either of the options --skip-table-check or --promote-attributes is in use. These differences in column definitions are of the following types:

    • Different COLUMN_FORMAT settings (FIXED, DYNAMIC, DEFAULT)

    • Different STORAGE settings (MEMORY, DISK)

    • Different default values

    • Different distribution key settings

  • --skip-unknown-objects

    Property Value
    Command-Line Format --skip-unknown-objects

    This option causes ndb_restore to ignore any schema objects it does not recognize while reading a native NDB backup. This can be used for restoring a backup made from a cluster running (for example) NDB 7.6 to a cluster running NDB Cluster 7.5.

  • --tab=dir_name, -T dir_name

    Property Value
    Command-Line Format --tab=dir_name
    Type Directory name

    Causes --print-data to create dump files, one per table, each named tbl_name.txt. It requires as its argument the path to the directory where the files should be saved; use . for the current directory.
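
    For illustration, the line format selected by the --fields-* options can be previewed with printf. With --fields-enclosed-by='"' and --fields-terminated-by=',', a two-column row (1, alice) would be written roughly as shown below; this is a sketch of the format, not actual ndb_restore output:

```shell
# Simulating the dump-file line format produced by
# --fields-enclosed-by='"' --fields-terminated-by=',' for the row (1, alice);
# note that the enclosing string is applied regardless of data type.
printf '"%s","%s"\n' 1 alice   # "1","alice"
```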

  • --verbose=#

    Property Value
    Command-Line Format --verbose=#
    Type Numeric
    Default Value 1
    Minimum Value 0
    Maximum Value 255

    Sets the level for the verbosity of the output. The minimum is 0; the maximum is 255. The default value is 1.

Error reporting.  ndb_restore reports both temporary and permanent errors. In the case of temporary errors, it may be able to recover from them, and reports Restore successful, but encountered temporary error, please look at configuration in such cases.

Important

After using ndb_restore to initialize an NDB Cluster for use in circular replication, binary logs on the SQL node acting as the replication slave are not automatically created, and you must cause them to be created manually. To cause the binary logs to be created, issue a SHOW TABLES statement on that SQL node before running START SLAVE. This is a known issue in NDB Cluster.

Restoring a backup to a previous version of NDB Cluster.  You may encounter issues when restoring a backup taken from a later version of NDB Cluster to a previous one, due to the use of features which do not exist in the earlier version. For example, tables created in NDB 8.0 by default use the utf8mb4_ai_ci character set, which is not available in NDB 7.6 and earlier, and so cannot be read by an ndb_restore binary from one of these earlier versions.

21.4.24.1 Restoring to a different number of data nodes

It is possible to restore from an NDB backup to a cluster having a different number of data nodes than the original from which the backup was taken. The following two sections discuss, respectively, the cases where the target cluster has a lesser or greater number of data nodes than the source of the backup.

21.4.24.1.1 Restoring to Fewer Nodes Than the Original

You can restore to a cluster having fewer data nodes than the original provided that the larger number of nodes is an even multiple of the smaller number. In the following example, we restore a backup taken on a cluster having four data nodes to a cluster having two data nodes.

  1. The management server for the original cluster is on host host10. The original cluster has four data nodes, with the node IDs and host names shown in the following extract from the management server's config.ini file:

    [ndbd]
    NodeId=2
    HostName=host2
    
    [ndbd]
    NodeId=4
    HostName=host4
    
    [ndbd]
    NodeId=6
    HostName=host6
    
    [ndbd]
    NodeId=8
    HostName=host8
    

    We assume that each data node was originally started with ndbmtd --ndb-connectstring=host10 or the equivalent.

  2. Perform a backup in the normal manner. See Section 21.5.3.2, “Using The NDB Cluster Management Client to Create a Backup”, for information about how to do this.

  3. The files created by the backup on each data node are listed here, where N is the node ID and B is the backup ID.

    • BACKUP-B-0.N.Data

    • BACKUP-B.N.ctl

    • BACKUP-B.N.log

    These files are found under BackupDataDir/BACKUP/BACKUP-B, on each data node. For the rest of this example, we assume that the backup ID is 1.

    Have all of these files available for later copying to the new data nodes (where they can be accessed on the data node's local file system by ndb_restore). It is simplest to copy them all to a single location; we assume that this is what you have done.
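
    The per-node file names can be generated mechanically from the backup ID and node ID; this small helper (hypothetical, for illustration only) makes the naming pattern explicit:

```shell
# Produce the three backup file names for backup ID B (first argument)
# and node ID N (second argument), following the pattern listed above.
backup_files() {
  echo "BACKUP-$1-0.$2.Data BACKUP-$1.$2.ctl BACKUP-$1.$2.log"
}
backup_files 1 2   # BACKUP-1-0.2.Data BACKUP-1.2.ctl BACKUP-1.2.log
```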

  4. The management server for the target cluster is on host host20, and the target has two data nodes, with the node IDs and host names shown, from the management server config.ini file on host20:

    [ndbd]
    NodeId=3
    hostname=host3
    
    [ndbd]
    NodeId=5
    hostname=host5
    

    Each of the data node processes on host3 and host5 should be started with ndbmtd -c host20 --initial or the equivalent, so that the new (target) cluster starts with clean data node file systems.

  5. Copy two different sets of two backup files to each of the target data nodes. For this example, copy the backup files from nodes 2 and 4 from the original cluster to node 3 in the target cluster. These files are listed here:

    • BACKUP-1-0.2.Data

    • BACKUP-1.2.ctl

    • BACKUP-1.2.log

    • BACKUP-1-0.4.Data

    • BACKUP-1.4.ctl

    • BACKUP-1.4.log

    Then copy the backup files from nodes 6 and 8 to node 5; these files are shown in the following list:

    然后将备份文件从节点6和8复制到节点5;这些文件显示在以下列表中:

    • BACKUP-1-0.4.Data

      备份-1-0.4.数据

    • BACKUP-1.4.ctl

      备份-1.4.ctl

    • BACKUP-1.4.log

      备份-1.4.log

    • BACKUP-1-0.8.Data

      备份-1-0.8.数据

    • BACKUP-1.8.ctl

      备份-1.8.ctl

    • BACKUP-1.8.log

      备份-1.8.log

    For the remainder of this example, we assume that the respective backup files have been saved to the directory /BACKUP-1 on each of nodes 3 and 5.

    对于本例的其余部分,我们假设各个备份文件已保存到节点3和节点5上的目录/backup-1。

  6. On each of the two target data nodes, you must restore from both sets of backups. First, restore the backups from nodes 2 and 4 to node 3 by invoking ndb_restore on host3 as shown here:

    shell> ndb_restore -c host20 --nodeid=2 --backupid=1 --restore-data --backup_path=/BACKUP-1
    
    shell> ndb_restore -c host20 --nodeid=4 --backupid=1 --restore-data --backup_path=/BACKUP-1
    

    Then restore the backups from nodes 6 and 8 to node 5 by invoking ndb_restore on host5, like this:

    shell> ndb_restore -c host20 --nodeid=6 --backupid=1 --restore-data --backup_path=/BACKUP-1
    
    shell> ndb_restore -c host20 --nodeid=8 --backupid=1 --restore-data --backup_path=/BACKUP-1
    
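The four invocations above follow a fixed pattern. As a minimal sketch, the loop below only prints the command for each source node ID, using the host and path names from this example; remove the leading echo to execute the commands for real once the backup files are in place:

```shell
# Dry-run sketch: print the ndb_restore commands for merging the backups
# from source nodes 2 and 4 onto host3, and from nodes 6 and 8 onto host5.
restore_set() {
    # $1 = management server host; remaining args = source node IDs
    mgm="$1"; shift
    for nodeid in "$@"; do
        echo ndb_restore -c "$mgm" --nodeid="$nodeid" --backupid=1 \
            --restore-data --backup_path=/BACKUP-1
    done
}

restore_set host20 2 4    # commands to run on host3
restore_set host20 6 8    # commands to run on host5
```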
21.4.24.1.2 Restoring to More Nodes Than the Original

The node ID specified for a given ndb_restore command is that of the node in the original backup and not that of the data node to restore it to. When performing a backup using the method described in this section, ndb_restore connects to the management server and obtains a list of data nodes in the cluster the backup is being restored to. The restored data is distributed accordingly, so that the number of nodes in the target cluster does not need to be known or calculated when performing the backup.

Note

When changing the total number of LCP threads or LQH threads per node group, you should recreate the schema from backup created using mysqldump.

  1. Create the backup of the data. You can do this by invoking the ndb_mgm client START BACKUP command from the system shell, like this:

    shell> ndb_mgm -e "START BACKUP 1"
    

    This assumes that the desired backup ID is 1.

  2. Create a backup of the schema. In NDB 7.5.2 and later, this step is necessary only if the total number of LCP threads or LQH threads per node group is changed.

    shell> mysqldump --no-data --routines --events --triggers --databases > myschema.sql
    
    Important

    Once you have created the NDB native backup using ndb_mgm, you must not make any schema changes before creating the backup of the schema (if you create one).

  3. Copy the backup directory to the new cluster. For example if the backup you want to restore has ID 1 and BackupDataDir = /backups/node_nodeid, then the path to the backup on this node is /backups/node_1/BACKUP/BACKUP-1. Inside this directory there are three files, listed here:

    • BACKUP-1-0.1.Data

    • BACKUP-1.1.ctl

    • BACKUP-1.1.log

    You should copy the entire directory to the new node.

    If you needed to create a schema file, copy this to a location on an SQL node where it can be read by mysqld.

There is no requirement for the backup to be restored from a specific node or nodes.

To restore from the backup just created, perform the following steps:

  1. Restore the schema.

    • If you created a separate schema backup file using mysqldump, import this file using the mysql client, similar to what is shown here:

      shell> mysql < myschema.sql
      

      When importing the schema file, you may need to specify the --user and --password options (and possibly others) in addition to what is shown, in order for the mysql client to be able to connect to the MySQL server.

    • If you did not need to create a schema file, you can re-create the schema using ndb_restore --restore-meta (short form -m), similar to what is shown here:

      shell> ndb_restore --nodeid=1 --backupid=1 --restore-meta --backup-path=/backups/node_1/BACKUP/BACKUP-1
      

      ndb_restore must be able to contact the management server; add the --ndb-connectstring option if and as needed to make this possible.

  2. Restore the data. This needs to be done once for each data node in the original cluster, each time using that data node's node ID. Assuming that there were 4 data nodes originally, the set of commands required would look something like this:

    ndb_restore --nodeid=1 --backupid=1 --restore-data --backup_path=/backups/node_1/BACKUP/BACKUP-1 --disable-indexes
    ndb_restore --nodeid=2 --backupid=1 --restore-data --backup_path=/backups/node_2/BACKUP/BACKUP-1 --disable-indexes
    ndb_restore --nodeid=3 --backupid=1 --restore-data --backup_path=/backups/node_3/BACKUP/BACKUP-1 --disable-indexes
    ndb_restore --nodeid=4 --backupid=1 --restore-data --backup_path=/backups/node_4/BACKUP/BACKUP-1 --disable-indexes
    

    These can be run in parallel.

    Be sure to add the --ndb-connectstring option as needed.

  3. Rebuild the indexes. These were disabled by the --disable-indexes option used in the commands just shown. Recreating the indexes avoids errors due to the restore not being consistent at all points. Rebuilding the indexes can also improve performance in some cases. To rebuild the indexes, execute the following command once, on a single node:

    shell> ndb_restore --nodeid=1 --backupid=1 --backup_path=/backups/node_1/BACKUP/BACKUP-1 --rebuild-indexes
    

    As mentioned previously, you may need to add the --ndb-connectstring option, so that ndb_restore can contact the management server.
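The data-restore and index-rebuild steps above can be combined into one script. The sketch below assumes 4 original data nodes, as in this example, and only prints the commands; on a real cluster the --restore-data invocations may be run in parallel, the --rebuild-indexes invocation runs once afterward, and --ndb-connectstring should be added as needed:

```shell
# Dry-run sketch: print the restore plan for a 4-node original cluster.
# Drop the leading "echo" inside the function to execute the commands.
print_restore_plan() {
    for nodeid in 1 2 3 4; do
        echo ndb_restore --nodeid="$nodeid" --backupid=1 --restore-data \
            --backup_path=/backups/node_"$nodeid"/BACKUP/BACKUP-1 --disable-indexes
    done
    echo ndb_restore --nodeid=1 --backupid=1 \
        --backup_path=/backups/node_1/BACKUP/BACKUP-1 --rebuild-indexes
}

print_restore_plan
```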

21.4.25 ndb_select_all — Print Rows from an NDB Table

ndb_select_all prints all rows from an NDB table to stdout.

Usage

ndb_select_all -c connection_string tbl_name -d db_name [> file_name]

The following table includes options that are specific to ndb_select_all. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_select_all), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.335 Command-line options for the ndb_select_all program

Format                 Description                                                  Added, Deprecated, or Removed

--database=dbname, -d  Name of the database in which the table is found             All MySQL 5.7 based releases
--parallelism=#, -p    Degree of parallelism                                        All MySQL 5.7 based releases
--lock=#, -l           Lock type                                                    All MySQL 5.7 based releases
--order=index, -o      Sort resultset according to index whose name is supplied     All MySQL 5.7 based releases
--descending, -z       Sort resultset in descending order (requires order flag)     All MySQL 5.7 based releases
--header, -h           Print header (set to 0|FALSE to disable headers in output)   All MySQL 5.7 based releases
--useHexFormat, -x     Output numbers in hexadecimal format                         All MySQL 5.7 based releases
--delimiter=char, -D   Set a column delimiter                                       All MySQL 5.7 based releases
--disk                 Print disk references (useful only for Disk Data tables having nonindexed columns)   All MySQL 5.7 based releases
--rowid                Print rowid                                                  All MySQL 5.7 based releases
--gci                  Include GCI in output                                        All MySQL 5.7 based releases
--gci64                Include GCI and row epoch in output                          All MySQL 5.7 based releases
--tupscan, -t          Scan in tup order                                            All MySQL 5.7 based releases
--nodata               Do not print table column data                               All MySQL 5.7 based releases

  • --database=dbname, -d dbname

    Name of the database in which the table is found. The default value is TEST_DB.

  • --parallelism=#, -p #

    Specifies the degree of parallelism.

  • --lock=lock_type, -l lock_type

    Employs a lock when reading the table. Possible values for lock_type are:

    • 0: Read lock

    • 1: Read lock with hold

    • 2: Exclusive read lock

    There is no default value for this option.

  • --order=index_name, -o index_name

    Orders the output according to the index named index_name.

    Note

    This is the name of an index, not of a column; the index must have been explicitly named when created.

  • --descending, -z

    Sorts the output in descending order. This option can be used only in conjunction with the -o (--order) option.

  • --header=FALSE

    Excludes column headers from the output.

  • --useHexFormat, -x

    Causes all numeric values to be displayed in hexadecimal format. This does not affect the output of numerals contained in strings or datetime values.

  • --delimiter=character, -D character

    Causes the character to be used as a column delimiter. Only table data columns are separated by this delimiter.

    The default delimiter is the tab character.

  • --disk

    Adds a disk reference column to the output. The column is nonempty only for Disk Data tables having nonindexed columns.

  • --rowid

    Adds a ROWID column providing information about the fragments in which rows are stored.

  • --gci

    Adds a GCI column to the output showing the global checkpoint at which each row was last updated. See Section 21.1, “NDB Cluster Overview”, and Section 21.5.6.2, “NDB Cluster Log Events”, for more information about checkpoints.

  • --gci64

    Adds a ROW$GCI64 column to the output showing the global checkpoint at which each row was last updated, as well as the number of the epoch in which this update occurred.

  • --tupscan, -t

    Scan the table in the order of the tuples.

  • --nodata

    Causes any table data to be omitted.

Sample Output

Output from a MySQL SELECT statement:

mysql> SELECT * FROM ctest1.fish;
+----+-----------+
| id | name      |
+----+-----------+
|  3 | shark     |
|  6 | puffer    |
|  2 | tuna      |
|  4 | manta ray |
|  5 | grouper   |
|  1 | guppy     |
+----+-----------+
6 rows in set (0.04 sec)

Output from the equivalent invocation of ndb_select_all:

shell> ./ndb_select_all -c localhost fish -d ctest1
id      name
3       [shark]
6       [puffer]
2       [tuna]
4       [manta ray]
5       [grouper]
1       [guppy]
6 rows returned

NDBT_ProgramExit: 0 - OK

All string values are enclosed by square brackets ([...]) in the output of ndb_select_all. For another example, consider the table created and populated as shown here:

CREATE TABLE dogs (
    id INT(11) NOT NULL AUTO_INCREMENT,
    name VARCHAR(25) NOT NULL,
    breed VARCHAR(50) NOT NULL,
    PRIMARY KEY pk (id),
    KEY ix (name)
)
TABLESPACE ts STORAGE DISK
ENGINE=NDBCLUSTER;

INSERT INTO dogs VALUES
    ('', 'Lassie', 'collie'),
    ('', 'Scooby-Doo', 'Great Dane'),
    ('', 'Rin-Tin-Tin', 'Alsatian'),
    ('', 'Rosscoe', 'Mutt');

This demonstrates the use of several additional ndb_select_all options:

shell> ./ndb_select_all -d ctest1 dogs -o ix -z --gci --disk
GCI     id name          breed        DISK_REF
834461  2  [Scooby-Doo]  [Great Dane] [ m_file_no: 0 m_page: 98 m_page_idx: 0 ]
834878  4  [Rosscoe]     [Mutt]       [ m_file_no: 0 m_page: 98 m_page_idx: 16 ]
834463  3  [Rin-Tin-Tin] [Alsatian]   [ m_file_no: 0 m_page: 34 m_page_idx: 0 ]
835657  1  [Lassie]      [Collie]     [ m_file_no: 0 m_page: 66 m_page_idx: 0 ]
4 rows returned

NDBT_ProgramExit: 0 - OK
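Because ndb_select_all brackets every string value, scripts that consume its output may want to strip the brackets. A minimal sed sketch, fed here with a fixed sample resembling the fish-table output above:

```shell
# Strip the square brackets that ndb_select_all places around string
# values, turning e.g. "[shark]" into "shark"; column order is unchanged.
printf '3\t[shark]\n6\t[puffer]\n' | sed 's/\[\([^][]*\)\]/\1/g'
```

In a pipeline, the sed filter would be applied to the real ndb_select_all output instead of the printf sample.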

21.4.26 ndb_select_count — Print Row Counts for NDB Tables

ndb_select_count prints the number of rows in one or more NDB tables. With a single table, the result is equivalent to that obtained by using the MySQL statement SELECT COUNT(*) FROM tbl_name.

Usage

ndb_select_count [-c connection_string] -d db_name tbl_name[, tbl_name2[, ...]]

The following table includes options that are specific to ndb_select_count. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_select_count), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.336 Command-line options for the ndb_select_count program

Format                 Description                                        Added, Deprecated, or Removed

--database=dbname, -d  Name of the database in which the table is found   All MySQL 5.7 based releases
--parallelism=#, -p    Degree of parallelism                              All MySQL 5.7 based releases
--lock=#, -l           Lock type                                          All MySQL 5.7 based releases

You can obtain row counts from multiple tables in the same database by listing the table names separated by spaces when invoking this command, as shown under Sample Output.

Sample Output

shell> ./ndb_select_count -c localhost -d ctest1 fish dogs
6 records in table fish
4 records in table dogs

NDBT_ProgramExit: 0 - OK

21.4.27 ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster

ndb_setup.py starts the NDB Cluster Auto-Installer and opens the installer's Start page in the default Web browser.

Important

This program is intended to be invoked as a normal user, and not with the mysql, system root, or other administrative account.

This section describes usage of and program options for the command-line tool only. For information about using the Auto-Installer GUI that is spawned when ndb_setup.py is invoked, see Section 21.2.1, “The NDB Cluster Auto-Installer (NDB 7.5)”.

Usage

All platforms:

ndb_setup.py [options]

Additionally, on Windows platforms only:

setup.bat [options]

The following table includes all options that are supported by the NDB Cluster installation and configuration program ndb_setup.py. Additional descriptions follow the table.

Table 21.337 Command-line options for the ndb_setup.py program

Format                             Description                                                                     Added, Deprecated, or Removed

--browser-start-page=filename, -s  Page that the web browser opens when starting                                   All MySQL 5.7 based releases
--ca-certs-file=filename, -a       File containing list of client certificates allowed to connect to the server    All MySQL 5.7 based releases
--cert-file=filename, -c           File containing X509 certificate that identifies the server (default: cfg.pem)  All MySQL 5.7 based releases
--debug-level=level, -d            Python logging module debug level; one of DEBUG, INFO, WARNING (default), ERROR, or CRITICAL   All MySQL 5.7 based releases
--help, -h                         Print help message                                                              All MySQL 5.7 based releases
--key-file=file, -k                Specify file containing private key (if not included in --cert-file)            All MySQL 5.7 based releases
--no-browser, -n                   Do not open the start page in a browser, merely start the tool                  All MySQL 5.7 based releases
--port=#, -p                       Specify the port used by the web server                                         All MySQL 5.7 based releases
--server-log-file=file, -o         Log requests to this file; use '-' to force logging to stderr instead           All MySQL 5.7 based releases
--server-name=name, -N             The name of the server to connect with                                          All MySQL 5.7 based releases
--use-http, -H                     Use unencrypted (HTTP) client/server connection                                 NDB 7.6 and later
--use-https, -S                    Use encrypted (HTTPS) client/server connection                                  All MySQL 5.7 based releases


  • --browser-start-page=file, -s

    Property Value
    Command-Line Format --browser-start-page=filename
    Type String
    Default Value index.html

    Specify the file to open in the browser as the installation and configuration Start page. The default is index.html.

  • --ca-certs-file=file, -a

    Property Value
    Command-Line Format --ca-certs-file=filename
    Type File name
    Default Value [none]

    Specify a file containing a list of client certificates which are allowed to connect to the server. The default is an empty string, which means that no client authentication is used.

  • --cert-file=file, -c

    Property Value
    Command-Line Format --cert-file=filename
    Type File name
    Default Value /usr/share/mysql/mcc/cfg.pem

    Specify a file containing an X.509 certificate which identifies the server. It is possible for the certificate to be self-signed. The default is cfg.pem.

  • --debug-level=level, -d

    Property Value
    Command-Line Format --debug-level=level
    Type Enumeration
    Default Value WARNING
    Valid Values WARNING, DEBUG, INFO, ERROR, CRITICAL

    Set the Python logging module debug level. This is one of DEBUG, INFO, WARNING, ERROR, or CRITICAL. WARNING is the default.

  • --help, -h

    Property Value
    Command-Line Format --help

    Print a help message.

  • --key-file=file, -k

    Property Value
    Command-Line Format --key-file=file
    Type File name
    Default Value [none]

    Specify a file containing the private key if this is not included in the X.509 certificate file (--cert-file). The default is an empty string, which means that no such file is used.

  • --no-browser, -n

    Property Value
    Command-Line Format --no-browser

    Start the installation and configuration tool, but do not open the Start page in a browser.

  • --port=#, -p

    Property Value
    Command-Line Format --port=#
    Type Numeric
    Default Value 8081
    Minimum Value 1
    Maximum Value 65535

    Set the port used by the web server. The default is 8081.

  • --server-log-file=file, -o

    Property Value
    Command-Line Format --server-log-file=file
    Type File name
    Default Value ndb_setup.log
    Valid Values ndb_setup.log, - (Log to stderr)

    Log requests to this file. The default is ndb_setup.log. To specify logging to stderr, rather than to a file, use a - (dash character) for the file name.

  • --server-name=host, -N

    Property Value
    Command-Line Format --server-name=name
    Type String
    Default Value localhost

    Specify the host name or IP address for the browser to use when connecting. The default is localhost.

  • --use-http, -H

    Property Value
    Command-Line Format --use-http

    Make the browser use HTTP to connect with the server. This means that the connection is unencrypted and not secured in any way.

    This option was added in NDB 7.6.

  • --use-https, -S

    Property Value
    Command-Line Format --use-https

    Make the browser use a secure (HTTPS) connection with the server.

21.4.28 ndb_show_tables — Display List of NDB Tables

ndb_show_tables displays a list of all NDB database objects in the cluster. By default, this includes not only both user-created tables and NDB system tables, but NDB-specific indexes, internal triggers, and NDB Cluster Disk Data objects as well.

The following table includes options that are specific to ndb_show_tables. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_show_tables), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.338 Command-line options for the ndb_show_tables program

Format                 Description                                                   Added, Deprecated, or Removed

--database=string, -d  Specifies the database in which the table is found            All MySQL 5.7 based releases
--loops=#, -l          Number of times to repeat output                              All MySQL 5.7 based releases
--parsable, -p         Return output suitable for MySQL LOAD DATA INFILE statement   All MySQL 5.7 based releases
--show-temp-status     Show table temporary flag                                     All MySQL 5.7 based releases
--type=#, -t           Limit output to objects of this type                          All MySQL 5.7 based releases
--unqualified, -u      Do not qualify table names                                    All MySQL 5.7 based releases


Usage

ndb_show_tables [-c connection_string]

  • --database, -d

    Specifies the name of the database in which the tables are found. If this option has not been specified, and no tables are found in the TEST_DB database, ndb_show_tables issues a warning.

  • --loops, -l

    Specifies the number of times the utility should execute. This is 1 when this option is not specified, but if you do use the option, you must supply an integer argument for it.

  • --parsable, -p

    Using this option causes the output to be in a format suitable for use with LOAD DATA.

  • --show-temp-status

    If specified, this causes temporary tables to be displayed.

  • --type, -t

    Can be used to restrict the output to one type of object, specified by an integer type code as shown here:

    • 1: System table

    • 2: User-created table

    • 3: Unique hash index

    Any other value causes all NDB database objects to be listed (the default).

  • --unqualified, -u

    If specified, this causes unqualified object names to be displayed.

Note

Only user-created NDB Cluster tables may be accessed from MySQL; system tables such as SYSTAB_0 are not visible to mysqld. However, you can examine the contents of system tables using NDB API applications such as ndb_select_all (see Section 21.4.25, “ndb_select_all — Print Rows from an NDB Table”).

21.4.29 ndb_size.pl — NDBCLUSTER Size Requirement Estimator

This is a Perl script that can be used to estimate the amount of space that would be required by a MySQL database if it were converted to use the NDBCLUSTER storage engine. Unlike the other utilities discussed in this section, it does not require access to an NDB Cluster (in fact, there is no reason for it to do so). However, it does need to access the MySQL server on which the database to be tested resides.

Requirements

  • A running MySQL server. The server instance does not have to provide support for NDB Cluster.

  • A working installation of Perl.

  • The DBI module, which can be obtained from CPAN if it is not already part of your Perl installation. (Many Linux and other operating system distributions provide their own packages for this library.)

  • A MySQL user account having the necessary privileges. If you do not wish to use an existing account, then creating one using GRANT USAGE ON db_name.*—where db_name is the name of the database to be examined—is sufficient for this purpose.

ndb_size.pl can also be found in the MySQL sources in storage/ndb/tools.

The following table includes options that are specific to the NDB Cluster program ndb_size.pl. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_size.pl), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.339 Command-line options for the ndb_size.pl program

Format                    Description                                                                  Added, Deprecated, or Removed

--database=dbname         The database or databases to examine; accepts a comma-delimited list; the default is ALL (use all databases found on the server)   All MySQL 5.7 based releases
--hostname[:port]         Specify host and optional port as host[:port]                                All MySQL 5.7 based releases
--socket=file_name        Specify a socket to connect to                                               All MySQL 5.7 based releases
--user=string             Specify a MySQL user name                                                    All MySQL 5.7 based releases
--password=string         Specify a MySQL user password                                                All MySQL 5.7 based releases
--format=string           Set output format (text or HTML)                                             All MySQL 5.7 based releases
--excludetables=tbl_list  Skip any tables in a comma-separated list of tables                          All MySQL 5.7 based releases
--excludedbs=db_list      Skip any databases in a comma-separated list of databases                    All MySQL 5.7 based releases
--savequeries=file        Saves all queries to the database into the file specified                    All MySQL 5.7 based releases
--loadqueries=file        Loads all queries from the file specified; does not connect to a database    All MySQL 5.7 based releases
--real_table_name=table   Designates a table to handle unique index size calculations                  All MySQL 5.7 based releases


Usage

perl ndb_size.pl [--database={db_name|ALL}] [--hostname=host[:port]] [--socket=socket] \
      [--user=user] [--password=password]  \
      [--help|-h] [--format={html|text}] \
      [--loadqueries=file_name] [--savequeries=file_name]

By default, this utility attempts to analyze all databases on the server. You can specify a single database using the --database option; the default behavior can be made explicit by using ALL for the name of the database. You can also exclude one or more databases by using the --excludedbs option with a comma-separated list of the names of the databases to be skipped. Similarly, you can cause specific tables to be skipped by listing their names, separated by commas, following the optional --excludetables option. A host name can be specified using --hostname; the default is localhost. You can specify a port in addition to the host using host:port format for the value of --hostname. The default port number is 3306. If necessary, you can also specify a socket; the default is /var/lib/mysql.sock. A MySQL user name and password can be specified using the corresponding options shown. It is also possible to control the format of the output using the --format option; this can take either of the values html or text, with text being the default. An example of the text output is shown here:

shell> ndb_size.pl --database=test --socket=/tmp/mysql.sock
ndb_size.pl report for database: 'test' (1 tables)
--------------------------------------------------
Connected to: DBI:mysql:host=localhost;mysql_socket=/tmp/mysql.sock

Including information for versions: 4.1, 5.0, 5.1

test.t1
-------

DataMemory for Columns (* means varsized DataMemory):
         Column Name            Type  Varsized   Key  4.1  5.0   5.1
     HIDDEN_NDB_PKEY          bigint             PRI    8    8     8
                  c2     varchar(50)         Y         52   52    4*
                  c1         int(11)                    4    4     4
                                                       --   --    --
Fixed Size Columns DM/Row                              64   64    12
   Varsize Columns DM/Row                               0    0     4

DataMemory for Indexes:
   Index Name                 Type        4.1        5.0        5.1
      PRIMARY                BTREE         16         16         16
                                           --         --         --
       Total Index DM/Row                  16         16         16

IndexMemory for Indexes:
               Index Name        4.1        5.0        5.1
                  PRIMARY         33         16         16
                                  --         --         --
           Indexes IM/Row         33         16         16

Summary (for THIS table):
                                 4.1        5.0        5.1
    Fixed Overhead DM/Row         12         12         16
           NULL Bytes/Row          4          4          4
           DataMemory/Row         96         96         48
                    (Includes overhead, bitmap and indexes)

  Varsize Overhead DM/Row          0          0          8
   Varsize NULL Bytes/Row          0          0          4
       Avg Varside DM/Row          0          0         16

                 No. Rows          0          0          0

        Rows/32kb DM Page        340        340        680
Fixedsize DataMemory (KB)          0          0          0

Rows/32kb Varsize DM Page          0          0       2040
  Varsize DataMemory (KB)          0          0          0

         Rows/8kb IM Page        248        512        512
         IndexMemory (KB)          0          0          0

Parameter Minimum Requirements
------------------------------
* indicates greater than default

                Parameter     Default        4.1         5.0         5.1
          DataMemory (KB)       81920          0           0           0
       NoOfOrderedIndexes         128          1           1           1
               NoOfTables         128          1           1           1
         IndexMemory (KB)       18432          0           0           0
    NoOfUniqueHashIndexes          64          0           0           0
           NoOfAttributes        1000          3           3           3
             NoOfTriggers         768          5           5           5

For debugging purposes, the Perl arrays containing the queries run by this script can be saved to a file using --savequeries; a file containing such arrays to be read during script execution can be specified using --loadqueries. Neither of these options has a default value.

To produce output in HTML format, use the --format option and redirect the output to a file, as shown here:

shell> ndb_size.pl --database=test --socket=/tmp/mysql.sock --format=html > ndb_size.html

(Without the redirection, the output is sent to stdout.)

The output from this script includes the following information:

  • Minimum values for the DataMemory, IndexMemory, MaxNoOfTables, MaxNoOfAttributes, MaxNoOfOrderedIndexes, and MaxNoOfTriggers configuration parameters required to accommodate the tables analyzed.

  • Memory requirements for all of the tables, attributes, ordered indexes, and unique hash indexes defined in the database.

  • The IndexMemory and DataMemory required per table and table row.

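
The per-row figures in the report above can be approximated by summing aligned column sizes plus a per-row overhead. The sketch below is illustrative only; the alignment, overhead, and page figures are simplified assumptions, not the exact formulas used by ndb_size.pl.

```python
# Rough estimate of DataMemory needs for one NDB table, in the spirit of
# the ndb_size.pl per-row summary. Overhead and page figures here are
# simplified assumptions, not the script's exact values.

def align4(nbytes):
    # NDB stores fixed-size column data in 4-byte words.
    return (nbytes + 3) & ~3

def estimate_datamemory_kb(column_bytes, row_count,
                           row_overhead=16, page_size=32768, page_overhead=128):
    row_size = sum(align4(b) for b in column_bytes) + row_overhead
    rows_per_page = (page_size - page_overhead) // row_size
    pages = -(-row_count // rows_per_page)  # ceiling division
    return pages * page_size // 1024

# A table like test.t1 above: hidden bigint primary key (8 bytes),
# varchar(50) (52 bytes), int(11) (4 bytes), for a hypothetical 100,000 rows.
print(estimate_datamemory_kb([8, 52, 4], row_count=100000))  # 7872
```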
21.4.30 ndb_top — View CPU usage information for NDB threads

ndb_top displays running information in the terminal about CPU usage by NDB threads on an NDB Cluster data node. Each thread is represented by two rows in the output, the first showing system statistics, the second showing the measured statistics for the thread.

ndb_top is available beginning with MySQL NDB Cluster 7.6.3.

Usage

ndb_top [-h hostname] [-t port] [-u user] [-p pass] [-n node_id]

ndb_top connects to a MySQL Server running as an SQL node of the cluster. By default, it attempts to connect to a mysqld running on localhost and port 3306, as the MySQL root user with no password specified. You can override the default host and port using, respectively, --host (-h) and --port (-t). To specify a MySQL user and password, use the --user (-u) and --passwd (-p) options. This user must be able to read tables in the ndbinfo database (ndb_top uses information from ndbinfo.cpustat and related tables).
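
In effect, this means sampling cumulative per-thread counters at intervals and differencing successive samples. A minimal, hypothetical sketch of that calculation follows; the field names are illustrative stand-ins, not the exact ndbinfo.cpustat column names.

```python
# Hypothetical sketch of deriving a per-thread busy percentage from two
# successive samples of cumulative counters, as a monitor such as ndb_top
# might do. Field names are illustrative, not the exact ndbinfo schema.

def busy_percent(prev, curr):
    # Time spent executing or sending, as a share of the elapsed interval.
    busy = (curr["exec_time"] - prev["exec_time"]) + \
           (curr["send_time"] - prev["send_time"])
    total = curr["elapsed_time"] - prev["elapsed_time"]
    return 100.0 * busy / total if total else 0.0

prev = {"exec_time": 1000, "send_time": 200, "elapsed_time": 50000}
curr = {"exec_time": 21000, "send_time": 5200, "elapsed_time": 100000}
print(busy_percent(prev, curr))  # 50.0
```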

For more information about MySQL user accounts and passwords, see Section 6.2, “Access Control and Account Management”.

Output is available as plain text or an ASCII graph; you can specify this using the --text (-x) and --graph (-g) options, respectively. These two display modes provide the same information; they can be used concurrently. At least one display mode must be in use.

Color display of the graph is supported and enabled by default (--color or -c option). With color support enabled, the graph display shows OS user time in blue, OS system time in green, and idle time as blank. For measured load, blue is used for execution time, yellow for send time, red for time spent in send buffer full waits, and blank spaces for idle time. The percentage shown in the graph display is the sum of percentages for all threads which are not idle. Colors are not currently configurable; you can use grayscale instead by using --skip-color.

The sorted view (--sort, -r) is based on the maximum of the measured load and the load reported by the OS. Display of these can be enabled and disabled using the --measured-load (-m) and --os-load (-o) options. Display of at least one of these loads must be enabled.
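
The display-mode and load-source constraints just described (at least one of --text/--graph, at least one of --os-load/--measured-load) amount to a simple validation, sketched here with the documented defaults:

```python
# Sketch of ndb_top's option constraints: at least one display mode and at
# least one load source must remain enabled. Defaults mirror the documented
# ones (--graph and --os-load on, --text and --measured-load off).

def validate_options(text=False, graph=True, os_load=True, measured_load=False):
    if not (text or graph):
        raise ValueError("at least one of --text or --graph must be enabled")
    if not (os_load or measured_load):
        raise ValueError("at least one of --os-load or --measured-load must be enabled")
    return True

print(validate_options())  # True
```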

The program tries to obtain statistics from a data node having the node ID given by the --node-id (-n) option; if unspecified, this is 1. ndb_top cannot provide information about other types of nodes.

The view adjusts itself to the height and width of the terminal window; the minimum supported width is 76 characters.

Once started, ndb_top runs continuously until forced to exit; you can quit the program using Ctrl-C. The display updates once per second; to set a different delay interval, use --sleep-time (-s).

Note

ndb_top is available on Mac OS X, Linux, and Solaris. It is not currently supported on Windows platforms.

The following table includes all options that are specific to the NDB Cluster program ndb_top. Additional descriptions follow the table.

Table 21.340 Command-line options for the ndb_top program

Format / Description / Added, Deprecated, or Removed

--color, -c
    Show ASCII graphs in color; use --skip-colors to disable
    ADDED: NDB 7.6.3

--graph, -g
    Display data using graphs; use --skip-graphs to disable
    ADDED: NDB 7.6.3

--help, -?
    Show program usage information
    ADDED: NDB 7.6.3

--host[=name], -h
    Host name or IP address of MySQL Server to connect to
    ADDED: NDB 7.6.3

--measured-load, -m
    Show measured load by thread
    ADDED: NDB 7.6.3

--node-id[=#], -n
    Watch node having this node ID
    ADDED: NDB 7.6.3

--os-load, -o
    Show load measured by operating system
    ADDED: NDB 7.6.3

--passwd[=password], -p
    Connect using this password
    ADDED: NDB 7.6.3; REMOVED: NDB 7.6.4

--password[=password], -p
    Connect using this password
    ADDED: NDB 7.6.6

--port[=#], -t (<=7.6.5), -P (>=7.6.6)
    Port number to use when connecting to MySQL Server
    ADDED: NDB 7.6.3

--sleep-time[=seconds], -s
    Time to wait between display refreshes, in seconds
    ADDED: NDB 7.6.3

--socket, -S
    Socket file to use for connection
    ADDED: NDB 7.6.6

--sort, -r
    Sort threads by usage; use --skip-sort to disable
    ADDED: NDB 7.6.3

--text, -x (<=7.6.5), -t (>=7.6.6)
    Display data using text
    ADDED: NDB 7.6.3

--user[=name], -u
    Connect as this MySQL user
    ADDED: NDB 7.6.3


In NDB 7.6.6 and later, ndb_top also supports the common NDB program options --defaults-file, --defaults-extra-file, --print-defaults, --no-defaults, and --defaults-group-suffix. (Bug #86614, Bug #26236298)
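
Since these are the standard option-file options, connection settings for ndb_top can then be kept in an option file group instead of being passed on the command line each time. A hypothetical my.cnf excerpt (the group name and values shown here are illustrative, not taken from the manual):

```
[ndb_top]
host=localhost
port=3306
user=root
node-id=8
sleep-time=2
```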

Additional Options

  • --color, -c

    Property Value
    Command-Line Format --color
    Introduced 5.7.19-ndb-7.6.3
    Type Boolean
    Default Value TRUE

    Show ASCII graphs in color; use --skip-colors to disable.

  • --graph, -g

    Property Value
    Command-Line Format --graph
    Introduced 5.7.19-ndb-7.6.3
    Type Boolean
    Default Value TRUE

    Display data using graphs; use --skip-graphs to disable. This option or --text must be true; both options may be true.

  • --help, -?

    Property Value
    Command-Line Format --help
    Introduced 5.7.19-ndb-7.6.3
    Type Boolean
    Default Value TRUE

    Show program usage information.

  • --host[=name], -h

    Property Value
    Command-Line Format --host[=name]
    Introduced 5.7.19-ndb-7.6.3
    Type String
    Default Value localhost

    Host name or IP address of MySQL Server to connect to.

  • --measured-load, -m

    Property Value
    Command-Line Format --measured-load
    Introduced 5.7.19-ndb-7.6.3
    Type Boolean
    Default Value FALSE

    Show measured load by thread. This option or --os-load must be true; both options may be true.

  • --node-id[=#], -n

    Property Value
    Command-Line Format --node-id[=#]
    Introduced 5.7.19-ndb-7.6.3
    Type Integer
    Default Value 1

    Watch the data node having this node ID.

  • --os-load, -o

    Property Value
    Command-Line Format --os-load
    Introduced 5.7.19-ndb-7.6.3
    Type Boolean
    Default Value TRUE

    Show load measured by operating system. This option or --measured-load must be true; both options may be true.

  • --passwd[=password], -p

    Property Value
    Command-Line Format --passwd[=password]
    Introduced 5.7.19-ndb-7.6.3
    Removed 5.7.20-ndb-7.6.4
    Type Boolean
    Default Value NULL

    Connect using this password.

    This option is deprecated in NDB 7.6.4. It is removed in NDB 7.6.6, where it is replaced by the --password option. (Bug #26907833)

  • --password[=password], -p

    Property Value
    Command-Line Format --password[=password]
    Introduced 5.7.22-ndb-7.6.6
    Type Boolean
    Default Value NULL

    Connect using this password.

    This option was added in NDB 7.6.6 as a replacement for the --passwd option used previously. (Bug #26907833)

  • --port[=#], -t (NDB 7.6.6 and later: -P)

    Property Value
    Command-Line Format --port[=#]
    Introduced 5.7.19-ndb-7.6.3
    Type Integer
    Default Value 3306

    Port number to use when connecting to MySQL Server.

    Beginning with NDB 7.6.6, the short form for this option is -P, and -t is repurposed as the short form for the --text option. (Bug #26907833)

  • --sleep-time[=seconds], -s

    Property Value
    Command-Line Format --sleep-time[=seconds]
    Introduced 5.7.19-ndb-7.6.3
    Type Integer
    Default Value 1

    Time to wait between display refreshes, in seconds.

  • --socket=path/to/file, -S

    Property Value
    Command-Line Format --socket
    Introduced 5.7.22-ndb-7.6.6
    Type Path name
    Default Value [none]

    Use the specified socket file for the connection.

    Added in NDB 7.6.6. (Bug #86614, Bug #26236298)

  • --sort, -r

    Property Value
    Command-Line Format --sort
    Introduced 5.7.19-ndb-7.6.3
    Type Boolean
    Default Value TRUE

    Sort threads by usage; use --skip-sort to disable.

  • --text, -x (NDB 7.6.6 and later: -t)

    Property Value
    Command-Line Format --text
    Introduced 5.7.19-ndb-7.6.3
    Type Boolean
    Default Value FALSE

    Display data using text. This option or --graph must be true; both options may be true.

    Beginning with NDB 7.6.6, the short form for this option is -t and support for -x is removed. (Bug #26907833)

  • --user[=name], -u

    Property Value
    Command-Line Format --user[=name]
    Introduced 5.7.19-ndb-7.6.3
    Type String
    Default Value root

    Connect as this MySQL user.

Sample Output.  The next figure shows ndb_top running in a terminal window on a Linux system with an ndbmtd data node under a moderate load. Here, the program has been invoked using ndb_top -n8 -x to provide both text and graph output:

Figure 21.37 ndb_top Running in Terminal

Display from ndb_top, running in a terminal window. Shows information for each node, including the utilized resources.

21.4.31 ndb_waiter — Wait for NDB Cluster to Reach a Given Status

ndb_waiter repeatedly (every 100 milliseconds) prints out the status of all cluster data nodes until either the cluster reaches a given status or the --timeout limit is exceeded, then exits. By default, it waits for the cluster to achieve STARTED status, in which all nodes have started and connected to the cluster. This can be overridden using the --no-contact and --not-started options.
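
The polling behavior just described can be sketched as a simple loop; get_node_states below is a stand-in for querying the management server, not a real API:

```python
# Minimal sketch of ndb_waiter's polling logic: report all data node states
# every 100 ms until the desired state is reached or the timeout expires.
import time

def wait_for_cluster(get_node_states, desired="STARTED", timeout=120.0, poll=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        states = get_node_states()
        for node_id in sorted(states):
            print(f"State node {node_id} {states[node_id]}")
        if all(s == desired for s in states.values()):
            return True
        time.sleep(poll)
    return False

# Simulate a two-node cluster in which node 2 finishes starting on the second poll.
samples = iter([{1: "STARTED", 2: "STARTING"}, {1: "STARTED", 2: "STARTED"}])
print(wait_for_cluster(lambda: next(samples), timeout=5.0))  # True
```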

The node states reported by this utility are as follows:

  • NO_CONTACT: The node cannot be contacted.

  • UNKNOWN: The node can be contacted, but its status is not yet known. Usually, this means that the node has received a START or RESTART command from the management server, but has not yet acted on it.

  • NOT_STARTED: The node has stopped, but remains in contact with the cluster. This is seen when restarting the node using the management client's RESTART command.

  • STARTING: The node's ndbd process has started, but the node has not yet joined the cluster.

  • STARTED: The node is operational, and has joined the cluster.

  • SHUTTING_DOWN: The node is shutting down.

  • SINGLE USER MODE: This is shown for all cluster data nodes when the cluster is in single user mode.

The following table includes options that are specific to the NDB Cluster program ndb_waiter. Additional descriptions follow the table. For options common to most NDB Cluster programs (including ndb_waiter), see Section 21.4.32, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

Table 21.341 Command-line options for the ndb_waiter program

Format / Description / Added, Deprecated, or Removed

--no-contact, -n
    Wait for cluster to reach NO CONTACT state
    All MySQL 5.7 based releases

--not-started
    Wait for cluster to reach NOT STARTED state
    All MySQL 5.7 based releases

--single-user
    Wait for cluster to enter single user mode
    All MySQL 5.7 based releases

--timeout=#, -t
    Wait this many seconds, then exit whether or not cluster has reached desired state; default is 2 minutes (120 seconds)
    All MySQL 5.7 based releases

--nowait-nodes=list
    List of nodes not to be waited for
    All MySQL 5.7 based releases

--wait-nodes=list, -w
    List of nodes to be waited for
    All MySQL 5.7 based releases


Usage

ndb_waiter [-c connection_string]

Additional Options

  • --no-contact, -n

    Instead of waiting for the STARTED state, ndb_waiter continues running until the cluster reaches NO_CONTACT status before exiting.

  • --not-started

    Instead of waiting for the STARTED state, ndb_waiter continues running until the cluster reaches NOT_STARTED status before exiting.

  • --timeout=seconds, -t seconds

    Time to wait. The program exits if the desired state is not achieved within this number of seconds. The default is 120 seconds (1200 reporting cycles).

  • --single-user

    The program waits for the cluster to enter single user mode.

  • --nowait-nodes=list

    When this option is used, ndb_waiter does not wait for the nodes whose IDs are listed. The list is comma-delimited; ranges can be indicated by dashes, as shown here:

    shell> ndb_waiter --nowait-nodes=1,3,7-9
    
    Important

    Do not use this option together with the --wait-nodes option.

  • --wait-nodes=list, -w list

    When this option is used, ndb_waiter waits only for the nodes whose IDs are listed. The list is comma-delimited; ranges can be indicated by dashes, as shown here:

    shell> ndb_waiter --wait-nodes=2,4-6,10
    
    Important

    Do not use this option together with the --nowait-nodes option.

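
Both options accept the same list syntax. A small sketch of parsing it, expanding dash ranges into individual node IDs:

```python
# Parse the comma-delimited node list accepted by --nowait-nodes and
# --wait-nodes, where dashes denote inclusive ranges (e.g. "1,3,7-9").

def parse_node_list(spec):
    nodes = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            nodes.update(range(int(lo), int(hi) + 1))
        else:
            nodes.add(int(part))
    return sorted(nodes)

print(parse_node_list("1,3,7-9"))  # [1, 3, 7, 8, 9]
```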
Sample Output.  Shown here is the output from ndb_waiter when run against a 4-node cluster in which two nodes have been shut down and then started again manually. Duplicate reports (indicated by ...) are omitted.

shell> ./ndb_waiter -c localhost

Connecting to mgmsrv at (localhost)
State node 1 STARTED
State node 2 NO_CONTACT
State node 3 STARTED
State node 4 NO_CONTACT
Waiting for cluster enter state STARTED

...

State node 1 STARTED
State node 2 UNKNOWN
State node 3 STARTED
State node 4 NO_CONTACT
Waiting for cluster enter state STARTED

...

State node 1 STARTED
State node 2 STARTING
State node 3 STARTED
State node 4 NO_CONTACT
Waiting for cluster enter state STARTED

...

State node 1 STARTED
State node 2 STARTING
State node 3 STARTED
State node 4 UNKNOWN
Waiting for cluster enter state STARTED

...

State node 1 STARTED
State node 2 STARTING
State node 3 STARTED
State node 4 STARTING
Waiting for cluster enter state STARTED

...

State node 1 STARTED
State node 2 STARTED
State node 3 STARTED
State node 4 STARTING
Waiting for cluster enter state STARTED

...

State node 1 STARTED
State node 2 STARTED
State node 3 STARTED
State node 4 STARTED
Waiting for cluster enter state STARTED

NDBT_ProgramExit: 0 - OK

Note

If no connection string is specified, then ndb_waiter tries to connect to a management server on localhost, and reports Connecting to mgmsrv at (null).

21.4.32 Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs

All NDB Cluster programs accept the options described in this section, with the following exceptions:

Note

Users of earlier NDB Cluster versions should note that some of these options have been changed to make them consistent with one another, and also with mysqld. You can use the --help option with any NDB Cluster program—with the exception of ndb_print_backup_file, ndb_print_schema_file, and ndb_print_sys_file—to view a list of the options which the program supports.

The options in the following table are common to all NDB Cluster executables (except those noted previously in this section).

Table 21.342 Command-line options common to all MySQL NDB Cluster programs

Format / Description / Added, Deprecated, or Removed

--character-sets-dir=dir_name
    Directory where character sets are installed
    All MySQL 5.7 based releases

--connect-retries=#
    Set the number of times to retry a connection before giving up
    All MySQL 5.7 based releases

--connect-retry-delay=#
    Time to wait between attempts to contact a management server, in seconds
    All MySQL 5.7 based releases

--core-file
    Write core on errors (defaults to TRUE in debug builds)
    All MySQL 5.7 based releases

--debug=options
    Enable output from debug calls; can be used only for versions compiled with debugging enabled
    All MySQL 5.7 based releases

--defaults-extra-file=filename
    Read this file after global option files are read
    All MySQL 5.7 based releases

--defaults-file=filename
    Read default options from this file
    All MySQL 5.7 based releases

--defaults-group-suffix
    Also read groups with names ending in this suffix
    All MySQL 5.7 based releases

--help, --usage, -?
    Display help message and exit
    All MySQL 5.7 based releases

--login-path=path
    Read this path from the login file
    All MySQL 5.7 based releases

--ndb-connectstring=connectstring, --connect-string=connectstring, -c
    Set connection string for connecting to ndb_mgmd. Syntax: [nodeid=<id>;][host=]<hostname>[:<port>]. Overrides entries specified in NDB_CONNECTSTRING or my.cnf
    All MySQL 5.7 based releases

--ndb-mgmd-host=host[:port]
    Set the host (and port, if desired) for connecting to management server
    All MySQL 5.7 based releases

--ndb-nodeid=#
    Set node id for this node
    All MySQL 5.7 based releases

--ndb-optimized-node-selection
    Select nodes for transactions in a more optimal way
    All MySQL 5.7 based releases

--no-defaults
    Do not read default options from any option file other than login file
    All MySQL 5.7 based releases

--print-defaults
    Print the program argument list and exit
    All MySQL 5.7 based releases

--version, -V
    Output version information and exit
    All MySQL 5.7 based releases


For options specific to individual NDB Cluster programs, see Section 21.4, “NDB Cluster Programs”.

See Section 21.3.3.9.1, “MySQL Server Options for NDB Cluster”, for mysqld options relating to NDB Cluster.

  • --character-sets-dir=name

    Property Value
    Command-Line Format --character-sets-dir=dir_name
    Type Directory name
    Default Value

    Tells the program where to find character set information.

    This option is supported by ndb_import in NDB 7.6.7 and later.

  • --connect-retries=#

    Property Value
    Command-Line Format --connect-retries=#
    Type Numeric
    Default Value 12
    Minimum Value 0
    Maximum Value 4294967295

This option specifies the number of times to retry a connection after the first attempt fails, before giving up (the client always tries the connection at least once). The length of time to wait per attempt is set using --connect-retry-delay.

    Note

    When used with ndb_mgm, this option has 3 as its default. See Section 21.4.5, “ndb_mgm — The NDB Cluster Management Client”, for more information.

  • --connect-retry-delay=#

    Property Value
    Command-Line Format --connect-retry-delay=#
    Type Numeric
    Default Value 5
    Minimum Value (>= 5.7.10-ndb-7.5.0) 1
    Minimum Value 0
    Maximum Value 4294967295

This option specifies the length of time to wait between connection attempts before giving up. The number of times to try connecting is set by --connect-retries.

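
Taken together with --connect-retries, this bounds the time spent waiting between attempts before the client gives up, sketched here with the documented defaults:

```python
# Worst-case time spent waiting between management-server connection
# attempts: --connect-retries retries separated by --connect-retry-delay
# seconds each (defaults: 12 retries, 5 seconds). The attempts themselves
# add further time on top of this.

def max_connect_wait(retries=12, delay=5):
    return retries * delay

print(max_connect_wait())  # 60
```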
  • --core-file

    Property Value
    Command-Line Format --core-file
    Type Boolean
    Default Value FALSE

Write a core file if the program dies. The name and location of the core file are system-dependent. (For NDB Cluster programs running on Linux, the default location is the program's working directory; for a data node, this is the node's DataDir.) On some systems there may be restrictions on core files; for example, it might be necessary to execute ulimit -c unlimited before starting the server. Consult your system documentation for detailed information.

    If NDB Cluster was built using the --debug option for configure, then --core-file is enabled by default. For regular builds, --core-file is disabled by default.

  • --debug[=options]

    Property Value
    Command-Line Format --debug=options
    Type String
    Default Value d:t:O,/tmp/ndb_restore.trace

    This option can be used only for versions compiled with debugging enabled. It is used to enable output from debug calls in the same manner as for the mysqld process.

  • --defaults-extra-file=filename

    Property Value
    Command-Line Format --defaults-extra-file=filename
    Type String
    Default Value [none]

    Read this file after global option files are read.

    For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”.

  • --defaults-file=filename

    Property Value
    Command-Line Format --defaults-file=filename
    Type String
    Default Value [none]

    Read default options from this file.

    For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”.

  • --defaults-group-suffix

    Property Value
    Command-Line Format --defaults-group-suffix
    Type String
    Default Value [none]

    Also read groups with names ending in this suffix.

    For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”.

  • --help, --usage, -?

    Property Value
    Command-Line Format

    --help

    --usage

    Prints a short list with descriptions of the available command options.

  • --login-path=path

    Property Value
    Command-Line Format --login-path=path
    Type String
    Default Value [none]

    Read this path from the login file.

    For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”.

  • --ndb-connectstring=connection_string, --connect-string=connection_string, -c connection_string

    Property Value
    Command-Line Format

    --ndb-connectstring=connectstring

    --connect-string=connectstring

    Type String
    Default Value localhost:1186

    This option takes an NDB Cluster connection string that specifies the management server for the application to connect to, as shown here:

    shell> ndbd --ndb-connectstring="nodeid=2;host=ndb_mgmd.mysql.com:1186"
    

    For more information, see Section 21.3.3.3, “NDB Cluster Connection Strings”.

  • --ndb-mgmd-host=host[:port]

    Property Value
    Command-Line Format --ndb-mgmd-host=host[:port]
    Type String
    Default Value localhost:1186

    Can be used to set the host and port number of a single management server for the program to connect to. If the program requires node IDs or references to multiple management servers (or both) in its connection information, use the --ndb-connectstring option instead.
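
    As an illustration, a data node might be pointed at a single management server like this (the host name shown is hypothetical):

    shell> ndbd --ndb-mgmd-host=mgmhost.example.com:1186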

  • --ndb-nodeid=#

    Property Value
    Command-Line Format --ndb-nodeid=#
    Type Numeric
    Default Value 0

    Sets this node's NDB Cluster node ID. The range of permitted values depends on the node's type (data, management, or API) and the NDB Cluster software version. See Section 21.1.7.2, “Limits and Differences of NDB Cluster from Standard MySQL Limits”, for more information.

  • --no-defaults

    Property Value
    Command-Line Format --no-defaults
    Type Boolean
    Default Value TRUE

    Do not read default options from any option file other than the login file.

    For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”.

  • --ndb-optimized-node-selection

    Property Value
    Command-Line Format --ndb-optimized-node-selection
    Type Boolean
    Default Value TRUE

    Optimize selection of nodes for transactions. Enabled by default.

  • --print-defaults

    Property Value
    Command-Line Format --print-defaults
    Type Boolean
    Default Value TRUE

    Print the program argument list and exit.

    For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”.

  • --version, -V

    Property Value
    Command-Line Format --version

    Prints the NDB Cluster version number of the executable. The version number is relevant because not all versions can be used together, and the NDB Cluster startup process verifies that the versions of the binaries being used can co-exist in the same cluster. This is also important when performing an online (rolling) software upgrade or downgrade of NDB Cluster.

    See Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”, for more information.

21.5 Management of NDB Cluster

Managing an NDB Cluster involves a number of tasks, the first of which is to configure and start NDB Cluster. This is covered in Section 21.3, “Configuration of NDB Cluster”, and Section 21.4, “NDB Cluster Programs”.

The next few sections cover the management of a running NDB Cluster.

For information about security issues relating to management and deployment of an NDB Cluster, see Section 21.5.12, “NDB Cluster Security Issues”.

There are essentially two methods of actively managing a running NDB Cluster. The first of these is through the use of commands entered into the management client whereby cluster status can be checked, log levels changed, backups started and stopped, and nodes stopped and started. The second method involves studying the contents of the cluster log ndb_node_id_cluster.log; this is usually found in the management server's DataDir directory, but this location can be overridden using the LogDestination option. (Recall that node_id represents the unique identifier of the node whose activity is being logged.) The cluster log contains event reports generated by ndbd. It is also possible to send cluster log entries to a Unix system log.

Some aspects of the cluster's operation can also be monitored from an SQL node using the SHOW ENGINE NDB STATUS statement.
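
For example, the statement can be issued in the mysql client on any SQL node:

mysql> SHOW ENGINE NDB STATUS;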

More detailed information about NDB Cluster operations is available in real time through an SQL interface using the ndbinfo database. For more information, see Section 21.5.10, “ndbinfo: The NDB Cluster Information Database”.

NDB statistics counters provide improved monitoring using the mysql client. These counters, implemented in the NDB kernel, relate to operations performed by or affecting Ndb objects, such as starting, closing, and aborting transactions; primary key and unique key operations; table, range, and pruned scans; blocked threads waiting for various operations to complete; and data and events sent and received by NDB Cluster. The counters are incremented by the NDB kernel whenever NDB API calls are made or data is sent to or received by the data nodes.

mysqld exposes the NDB API statistics counters as system status variables, which can be identified from the prefix common to all of their names (Ndb_api_). The values of these variables can be read in the mysql client from the output of a SHOW STATUS statement, or by querying either the SESSION_STATUS table or the GLOBAL_STATUS table (in the INFORMATION_SCHEMA database). By comparing the values of the status variables before and after the execution of an SQL statement that acts on NDB tables, you can observe the actions taken on the NDB API level that correspond to this statement, which can be beneficial for monitoring and performance tuning of NDB Cluster.
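
For example, sampling one of these counters before and after a statement shows how many transactions the statement started on the NDB API level; the database and table names used here are hypothetical:

mysql> SHOW GLOBAL STATUS LIKE 'Ndb_api_trans_start_count_session';
mysql> SELECT COUNT(*) FROM mydb.t1;
mysql> SHOW GLOBAL STATUS LIKE 'Ndb_api_trans_start_count_session';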

MySQL Cluster Manager provides an advanced command-line interface that simplifies many otherwise complex NDB Cluster management tasks, such as starting, stopping, or restarting an NDB Cluster with a large number of nodes. The MySQL Cluster Manager client also supports commands for getting and setting the values of most node configuration parameters as well as mysqld server options and variables relating to NDB Cluster. See MySQL™ Cluster Manager 1.4.7 User Manual, for more information.

21.5.1 Summary of NDB Cluster Start Phases

This section provides a simplified outline of the steps involved when NDB Cluster data nodes are started. More complete information can be found in NDB Cluster Start Phases, in the NDB Internals Guide.

These phases are the same as those reported in the output from the node_id STATUS command in the management client (see Section 21.5.2, “Commands in the NDB Cluster Management Client”). These start phases are also reported in the start_phase column of the ndbinfo.nodes table.

Start types.  There are several different startup types and modes, as shown in the following list:

  • Initial start.  The cluster starts with a clean file system on all data nodes. This occurs either when the cluster is started for the very first time, or when all data nodes are restarted using the --initial option.

    Note

    Disk Data files are not removed when restarting a node using --initial.

  • System restart.  The cluster starts and reads data stored in the data nodes. This occurs when the cluster has been shut down after being in use and is to resume operations from the point where it left off.

  • Node restart.  This is the online restart of a cluster node while the cluster itself is running.

  • Initial node restart.  This is the same as a node restart, except that the node is reinitialized and started with a clean file system.

Setup and initialization (phase -1).  Prior to startup, each data node (ndbd process) must be initialized. Initialization consists of the following steps:

  1. Obtain a node ID

  2. Fetch configuration data

  3. Allocate ports to be used for inter-node communications

  4. Allocate memory according to settings obtained from the configuration file

When a data node or SQL node first connects to the management node, it reserves a cluster node ID. To make sure that no other node allocates the same node ID, this ID is retained until the node has managed to connect to the cluster and at least one ndbd reports that this node is connected. This retention of the node ID is guarded by the connection between the node in question and ndb_mgmd.

After each data node has been initialized, the cluster startup process can proceed. The stages which the cluster goes through during this process are listed here:

  • Phase 0.  The NDBFS and NDBCNTR blocks start (see NDB Kernel Blocks). Data node file systems are cleared on those data nodes that were started with --initial option.

  • Phase 1.  In this stage, all remaining NDB kernel blocks are started. NDB Cluster connections are set up, inter-block communications are established, and heartbeats are started. In the case of a node restart, API node connections are also checked.

    Note

    When one or more nodes hang in Phase 1 while the remaining node or nodes hang in Phase 2, this often indicates network problems. One possible cause of such issues is one or more cluster hosts having multiple network interfaces. Another common source of problems causing this condition is the blocking of TCP/IP ports needed for communications between cluster nodes. In the latter case, this is often due to a misconfigured firewall.

  • Phase 2.  The NDBCNTR kernel block checks the states of all existing nodes. The master node is chosen, and the cluster schema file is initialized.

  • Phase 3.  The DBLQH and DBTC kernel blocks set up communications between them. The startup type is determined; if this is a restart, the DBDIH block obtains permission to perform the restart.

  • Phase 4.  For an initial start or initial node restart, the redo log files are created. The number of these files is equal to NoOfFragmentLogFiles.

    For a system restart:

    • Read schema or schemas.

    • Read data from the local checkpoint.

    • Apply all redo information until the latest restorable global checkpoint has been reached.

    For a node restart, find the tail of the redo log.

  • Phase 5.  Most of the database-related portion of a data node start is performed during this phase. For an initial start or system restart, a local checkpoint is executed, followed by a global checkpoint. Periodic checks of memory usage begin during this phase, and any required node takeovers are performed.

  • Phase 6.  In this phase, node groups are defined and set up.

  • Phase 7.  The arbitrator node is selected and begins to function. The next backup ID is set, as is the backup disk write speed. Nodes reaching this start phase are marked as Started. It is now possible for API nodes (including SQL nodes) to connect to the cluster.

  • Phase 8.  If this is a system restart, all indexes are rebuilt (by DBDIH).

  • Phase 9.  The node internal startup variables are reset.

  • Phase 100 (OBSOLETE).  Formerly, it was at this point during a node restart or initial node restart that API nodes could connect to the node and begin to receive events. Currently, this phase is empty.

  • Phase 101.  At this point in a node restart or initial node restart, event delivery is handed over to the node joining the cluster. The newly-joined node takes over responsibility for delivering its primary data to subscribers. This phase is also referred to as SUMA handover phase.

After this process is completed for an initial start or system restart, transaction handling is enabled. For a node restart or initial node restart, completion of the startup process means that the node may now act as a transaction coordinator.

21.5.2 Commands in the NDB Cluster Management Client

In addition to the central configuration file, a cluster may also be controlled through the command-line interface provided by the management client ndb_mgm. This is the primary administrative interface to a running cluster.
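
For example, the client can be started from the system shell, optionally passing a connection string for the management server (the host name shown is hypothetical):

shell> ndb_mgm -c mgmhost.example.com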

Commands for the event logs are given in Section 21.5.6, “Event Reports Generated in NDB Cluster”; commands for creating backups and restoring from them are provided in Section 21.5.3, “Online Backup of NDB Cluster”.

Using ndb_mgm with MySQL Cluster Manager.  MySQL Cluster Manager handles starting and stopping processes and tracks their states internally, so it is not necessary to use ndb_mgm for these tasks for an NDB Cluster that is under MySQL Cluster Manager control. For such a cluster, it is recommended not to use the ndb_mgm command-line client that comes with the NDB Cluster distribution to perform operations that involve starting or stopping nodes. These include but are not limited to the START, STOP, RESTART, and SHUTDOWN commands. For more information, see MySQL Cluster Manager Process Commands.

The management client has the following basic commands. In the listing that follows, node_id denotes either a data node ID or the keyword ALL, which indicates that the command should be applied to all of the cluster's data nodes.

  • HELP

    Displays information on all available commands.

  • CONNECT connection-string

    Connects to the management server indicated by the connection string. If the client is already connected to this server, the client reconnects.
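
    For example (the host name is hypothetical):

    ndb_mgm> CONNECT mgmhost.example.com:1186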

  • SHOW

    Displays information on the cluster's status. Possible node status values include UNKNOWN, NO_CONTACT, NOT_STARTED, STARTING, STARTED, SHUTTING_DOWN, and RESTARTING. The output from this command also indicates when the cluster is in single user mode (status SINGLE USER MODE).

  • node_id START

    Brings online the data node identified by node_id (or all data nodes).

    ALL START works on all data nodes only, and does not affect management nodes.

    Important

    To use this command to bring a data node online, the data node must have been started using --nostart or -n.
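
    For example, a data node started from the system shell with --nostart can later be brought online from the management client; the connection string and node ID shown here are hypothetical:

    shell> ndbd --ndb-connectstring=mgmhost --nostart
    ndb_mgm> 2 START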

  • node_id STOP [-a] [-f]

    Stops the data or management node identified by node_id.

    Note

    ALL STOP works to stop all data nodes only, and does not affect management nodes.

    A node affected by this command disconnects from the cluster, and its associated ndbd or ndb_mgmd process terminates.

    The -a option causes the node to be stopped immediately, without waiting for the completion of any pending transactions.

    Normally, STOP fails if the result would cause an incomplete cluster. The -f option forces the node to shut down without checking for this. If this option is used and the result is an incomplete cluster, the cluster immediately shuts down.
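
    For example, the following stops the data node with node ID 2 immediately, without waiting for pending transactions to complete (the node ID is hypothetical):

    ndb_mgm> 2 STOP -a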

    Warning

    Use of the -a option also disables the safety check otherwise performed when STOP is invoked to ensure that stopping the node does not cause an incomplete cluster. In other words, you should exercise extreme care when using the -a option with the STOP command, due to the fact that this option makes it possible for the cluster to undergo a forced shutdown because it no longer has a complete copy of all data stored in NDB.

  • node_id RESTART [-n] [-i] [-a] [-f]

    Restarts the data node identified by node_id (or all data nodes).

    Using the -i option with RESTART causes the data node to perform an initial restart; that is, the node's file system is deleted and recreated. The effect is the same as that obtained from stopping the data node process and then starting it again using ndbd --initial from the system shell.

    Note

    Backup files and Disk Data files are not removed when this option is used.

    Using the -n option causes the data node process to be restarted, but the data node is not actually brought online until the appropriate START command is issued. The effect of this option is the same as that obtained from stopping the data node and then starting it again using ndbd --nostart or ndbd -n from the system shell.

    Using the -a option causes all current transactions relying on this node to be aborted. No GCP check is done when the node rejoins the cluster.

    Normally, RESTART fails if taking the node offline would result in an incomplete cluster. The -f option forces the node to restart without checking for this. If this option is used and the result is an incomplete cluster, the entire cluster is restarted.
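
    For example, the following combines -n and -i to reinitialize data node 3 and leave it waiting for a subsequent START command (the node ID is hypothetical):

    ndb_mgm> 3 RESTART -n -i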

  • node_id STATUS

    Displays status information for the data node identified by node_id (or for all data nodes).

    The output from this command also indicates when the cluster is in single user mode.

  • node_id REPORT report-type

    Displays a report of type report-type for the data node identified by node_id, or for all data nodes using ALL.

    Currently, there are three accepted values for report-type:

    • BackupStatus provides a status report on a cluster backup in progress

    • MemoryUsage displays how much data memory and index memory is being used by each data node as shown in this example:

      ndb_mgm> ALL REPORT MEMORY
      
      Node 1: Data usage is 5%(177 32K pages of total 3200)
      Node 1: Index usage is 0%(108 8K pages of total 12832)
      Node 2: Data usage is 5%(177 32K pages of total 3200)
      Node 2: Index usage is 0%(108 8K pages of total 12832)
      

      This information is also available from the ndbinfo.memoryusage table.

    • EventLog reports events from the event log buffers of one or more data nodes.

    report-type is case-insensitive and fuzzy; for MemoryUsage, you can use MEMORY (as shown in the prior example), memory, or even simply MEM (or mem). You can abbreviate BackupStatus in a similar fashion.

  • ENTER SINGLE USER MODE node_id

    Enters single user mode, whereby only the MySQL server identified by the node ID node_id is permitted to access the database.
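
    For example, the following permits only the SQL node with node ID 5 to access the database (the node ID is hypothetical):

    ndb_mgm> ENTER SINGLE USER MODE 5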

  • EXIT SINGLE USER MODE

    Exits single user mode, enabling all SQL nodes (that is, all running mysqld processes) to access the database.

    Note

    It is possible to use EXIT SINGLE USER MODE even when not in single user mode, although the command has no effect in this case.

  • QUIT, EXIT

    Terminates the management client.

    This command does not affect any nodes connected to the cluster.

  • SHUTDOWN

    Shuts down all cluster data nodes and management nodes. To exit the management client after this has been done, use EXIT or QUIT.

    This command does not shut down any SQL nodes or API nodes that are connected to the cluster.

  • CREATE NODEGROUP nodeid[, nodeid, ...]

    Creates a new NDB Cluster node group and causes data nodes to join it.

    This command is used after adding new data nodes online to an NDB Cluster, and causes them to join a new node group and thus to begin participating fully in the cluster. The command takes as its sole parameter a comma-separated list of node IDs—these are the IDs of the nodes just added and started that are to join the new node group. The number of nodes must be the same as the number of nodes in each node group that is already part of the cluster (each NDB Cluster node group must have the same number of nodes). In other words, if the NDB Cluster has 2 node groups of 2 data nodes each, then the new node group must also have 2 data nodes.
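
    For example, if data nodes with IDs 3 and 4 have just been added and started, they can be joined into a new node group like this (the node IDs are hypothetical):

    ndb_mgm> CREATE NODEGROUP 3,4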

    The node group ID of the new node group created by this command is determined automatically, and always the next highest unused node group ID in the cluster; it is not possible to set it manually.

    For more information, see Section 21.5.15, “Adding NDB Cluster Data Nodes Online”.

  • DROP NODEGROUP nodegroup_id

    Drops the NDB Cluster node group with the given nodegroup_id.

    This command can be used to drop a node group from an NDB Cluster. DROP NODEGROUP takes as its sole argument the node group ID of the node group to be dropped.
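
    For example (the node group ID is hypothetical):

    ndb_mgm> DROP NODEGROUP 1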

    DROP NODEGROUP acts only to remove the data nodes in the affected node group from that node group. It does not stop data nodes, assign them to a different node group, or remove them from the cluster's configuration. A data node that does not belong to a node group is indicated in the output of the management client SHOW command with no nodegroup shown in place of the node group ID, like this:

    id=3    @10.100.2.67  (5.7.28-ndb-7.5.16, no nodegroup)
    

    DROP NODEGROUP works only when all data nodes in the node group to be dropped are completely empty of any table data and table definitions. Since there is currently no way using ndb_mgm or the mysql client to remove all data from a specific data node or node group, this means that the command succeeds only in the two following cases:

    1. After issuing CREATE NODEGROUP in the ndb_mgm client, but before issuing any ALTER TABLE ... REORGANIZE PARTITION statements in the mysql client.

    2. After dropping all NDBCLUSTER tables using DROP TABLE.

      TRUNCATE TABLE does not work for this purpose because this removes only the table data; the data nodes continue to store an NDBCLUSTER table's definition until a DROP TABLE statement is issued that causes the table metadata to be dropped.

    For more information about DROP NODEGROUP, see Section 21.5.15, “Adding NDB Cluster Data Nodes Online”.

  • PROMPT [prompt]

    Changes the prompt shown by ndb_mgm to the string literal prompt.

    prompt should not be quoted (unless you want the prompt to include the quotation marks). Unlike the case with the mysql client, special character sequences and escapes are not recognized. If called without an argument, the command resets the prompt to the default value (ndb_mgm>).

    Some examples are shown here:

    ndb_mgm> PROMPT mgm#1:
    mgm#1: SHOW
    Cluster Configuration
    ...
    mgm#1: PROMPT mymgm >
    mymgm > PROMPT 'mymgm:'
    'mymgm:' PROMPT  mymgm:
    mymgm: PROMPT
    ndb_mgm> EXIT
    jon@valhaj:~/bin>
    

    Note that leading spaces and spaces within the prompt string are not trimmed. Trailing spaces are removed.

    The PROMPT command was added in NDB 7.5.0.

  • node_id NODELOG DEBUG {ON|OFF}

    Toggles debug logging in the node log, as though the affected data node or nodes had been started with the --verbose option. NODELOG DEBUG ON starts debug logging; NODELOG DEBUG OFF switches debug logging off.
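
    For example, the following enables debug logging on all data nodes:

    ndb_mgm> ALL NODELOG DEBUG ON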

    This command was added in NDB 7.6.4.

Additional commands.  A number of other commands available in the ndb_mgm client are described elsewhere, as shown in the following list:

  • START BACKUP is used to perform an online backup in the ndb_mgm client; the ABORT BACKUP command is used to cancel a backup already in progress. For more information, see Section 21.5.3, “Online Backup of NDB Cluster”.

  • The CLUSTERLOG command is used to perform various logging functions. See Section 21.5.6, “Event Reports Generated in NDB Cluster”, for more information and examples. NDB 7.6.4 adds NODELOG DEBUG to activate or deactivate debug printouts in node logs, as described previously in this section.

  • For testing and diagnostics work, the client supports a DUMP command which can be used to execute internal commands on the cluster. It should never be used in a production setting unless directed to do so by MySQL Support. For more information, see MySQL NDB Cluster Internals Manual.

21.5.3 Online Backup of NDB Cluster

The next few sections describe how to prepare for and then to create an NDB Cluster backup using the functionality for this purpose found in the ndb_mgm management client. To distinguish this type of backup from a backup made using mysqldump, we sometimes refer to it as a native NDB Cluster backup. (For information about the creation of backups with mysqldump, see Section 4.5.4, “mysqldump — A Database Backup Program”.) Restoration of NDB Cluster backups is done using the ndb_restore utility provided with the NDB Cluster distribution; for information about ndb_restore and its use in restoring NDB Cluster backups, see Section 21.4.24, “ndb_restore — Restore an NDB Cluster Backup”.

21.5.3.1 NDB Cluster Backup Concepts

A backup is a snapshot of the database at a given time. The backup consists of three main parts:

  • Metadata.  The names and definitions of all database tables

  • Table records.  The data actually stored in the database tables at the time that the backup was made

  • Transaction log.  A sequential record telling how and when data was stored in the database

Each of these parts is saved on all nodes participating in the backup. During backup, each node saves these three parts into three files on disk:

  • BACKUP-backup_id.node_id.ctl

    A control file containing control information and metadata. Each node saves the same table definitions (for all tables in the cluster) to its own version of this file.

  • BACKUP-backup_id-0.node_id.data

    A data file containing the table records, which are saved on a per-fragment basis. That is, different nodes save different fragments during the backup. The file saved by each node starts with a header that states the tables to which the records belong. Following the list of records there is a footer containing a checksum for all records.

  • BACKUP-backup_id.node_id.log

    A log file containing records of committed transactions. Only transactions on tables stored in the backup are stored in the log. Nodes involved in the backup save different records because different nodes host different database fragments.

In the listing just shown, backup_id stands for the backup identifier and node_id is the unique identifier for the node creating the file.
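
These names can be composed mechanically. The following shell sketch (the backup_id and node_id values are illustrative only) shows the three file names produced for a single node:

```shell
# Compose the three NDB backup file names for one node.
# The backup_id and node_id values here are illustrative only.
backup_id=3
node_id=2

ctl_file="BACKUP-${backup_id}.${node_id}.ctl"
data_file="BACKUP-${backup_id}-0.${node_id}.data"
log_file="BACKUP-${backup_id}.${node_id}.log"

printf '%s\n%s\n%s\n' "$ctl_file" "$data_file" "$log_file"
```

With multiple data nodes, each node writes its own set of these files, distinguished by node_id.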

The location of the backup files is determined by the BackupDataDir parameter.

21.5.3.2 Using The NDB Cluster Management Client to Create a Backup

Before starting a backup, make sure that the cluster is properly configured for performing one. (See Section 21.5.3.3, “Configuration for NDB Cluster Backups”.)

The START BACKUP command is used to create a backup:

START BACKUP [backup_id] [wait_option] [snapshot_option]

wait_option:
WAIT {STARTED | COMPLETED} | NOWAIT

snapshot_option:
SNAPSHOTSTART | SNAPSHOTEND

Successive backups are automatically identified sequentially, so the backup_id, an integer greater than or equal to 1, is optional; if it is omitted, the next available value is used. If an existing backup_id value is used, the backup fails with the error Backup failed: file already exists. If used, the backup_id must follow START BACKUP immediately, before any other options are used.

The wait_option can be used to determine when control is returned to the management client after a START BACKUP command is issued, as shown in the following list:

  • If NOWAIT is specified, the management client displays a prompt immediately, as seen here:

    ndb_mgm> START BACKUP NOWAIT
    ndb_mgm>
    

    In this case, the management client can be used even while it prints progress information from the backup process.

  • With WAIT STARTED the management client waits until the backup has started before returning control to the user, as shown here:

    ndb_mgm> START BACKUP WAIT STARTED
    Waiting for started, this may take several minutes
    Node 2: Backup 3 started from node 1
    ndb_mgm>
    
  • WAIT COMPLETED causes the management client to wait until the backup process is complete before returning control to the user.

WAIT COMPLETED is the default.

A snapshot_option can be used to determine whether the backup matches the state of the cluster when START BACKUP was issued, or when it was completed. SNAPSHOTSTART causes the backup to match the state of the cluster when the backup began; SNAPSHOTEND causes the backup to reflect the state of the cluster when the backup was finished. SNAPSHOTEND is the default, and matches the behavior found in previous NDB Cluster releases.

Note

If you use the SNAPSHOTSTART option with START BACKUP, and the CompressedBackup parameter is enabled, only the data and control files are compressed—the log file is not compressed.

If both a wait_option and a snapshot_option are used, they may be specified in either order. For example, all of the following commands are valid, assuming that there is no existing backup having 4 as its ID:

START BACKUP WAIT STARTED SNAPSHOTSTART
START BACKUP SNAPSHOTSTART WAIT STARTED
START BACKUP 4 WAIT COMPLETED SNAPSHOTSTART
START BACKUP SNAPSHOTEND WAIT COMPLETED
START BACKUP 4 NOWAIT SNAPSHOTSTART

The procedure for creating a backup consists of the following steps:

  1. Start the management client (ndb_mgm), if it is not already running.

  2. Execute the START BACKUP command. This produces several lines of output indicating the progress of the backup, as shown here:

    ndb_mgm> START BACKUP
    Waiting for completed, this may take several minutes
    Node 2: Backup 1 started from node 1
    Node 2: Backup 1 started from node 1 completed
     StartGCP: 177 StopGCP: 180
     #Records: 7362 #LogRecords: 0
     Data: 453648 bytes Log: 0 bytes
    ndb_mgm>
    
  3. When the backup has started, the management client displays this message:

    Backup backup_id started from node node_id
    

    backup_id is the unique identifier for this particular backup. This identifier is saved in the cluster log, if it has not been configured otherwise. node_id is the identifier of the management server that is coordinating the backup with the data nodes. At this point in the backup process the cluster has received and processed the backup request. It does not mean that the backup has finished. An example of this statement is shown here:

    Node 2: Backup 1 started from node 1
    
  4. The management client indicates with a message like this one that the backup has completed:

    Backup backup_id started from node node_id completed
    

    As is the case for the notification that the backup has started, backup_id is the unique identifier for this particular backup, and node_id is the node ID of the management server that is coordinating the backup with the data nodes. This output is accompanied by additional information including relevant global checkpoints, the number of records backed up, and the size of the data, as shown here:

    Node 2: Backup 1 started from node 1 completed
     StartGCP: 177 StopGCP: 180
     #Records: 7362 #LogRecords: 0
     Data: 453648 bytes Log: 0 bytes
    

It is also possible to perform a backup from the system shell by invoking ndb_mgm with the -e or --execute option, as shown in this example:

shell> ndb_mgm -e "START BACKUP 6 WAIT COMPLETED SNAPSHOTSTART"

When using START BACKUP in this way, you must specify the backup ID.
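
One way to satisfy this requirement in scripted backups is to derive the ID from the date; the sketch below uses a fixed date string purely for illustration and only prints the command it would run, rather than invoking ndb_mgm:

```shell
# Derive a numeric backup ID from a date string (fixed here for
# illustration; a real script might use `date +%Y%m%d` instead).
backup_date="2024-01-15"
backup_id=$(echo "$backup_date" | tr -d '-')

# Print, rather than execute, the resulting backup command.
echo "ndb_mgm -e \"START BACKUP $backup_id WAIT COMPLETED\""
```

Keep in mind that a second backup started on the same day would reuse the same ID and fail with Backup failed: file already exists.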

Cluster backups are created by default in the BACKUP subdirectory of the DataDir on each data node. This can be overridden for one or more data nodes individually, or for all cluster data nodes in the config.ini file using the BackupDataDir configuration parameter. The backup files created for a backup with a given backup_id are stored in a subdirectory named BACKUP-backup_id in the backup directory.

Cancelling backups.  To cancel or abort a backup that is already in progress, perform the following steps:

  1. Start the management client.

  2. Execute this command:

    ndb_mgm> ABORT BACKUP backup_id
    

    The number backup_id is the identifier of the backup that was included in the response of the management client when the backup was started (in the message Backup backup_id started from node management_node_id).

  3. The management client will acknowledge the abort request with Abort of backup backup_id ordered.

    Note

    At this point, the management client has not yet received a response from the cluster data nodes to this request, and the backup has not yet actually been aborted.

  4. After the backup has been aborted, the management client will report this fact in a manner similar to what is shown here:

    Node 1: Backup 3 started from 5 has been aborted.
      Error: 1321 - Backup aborted by user request: Permanent error: User defined error
    Node 3: Backup 3 started from 5 has been aborted.
      Error: 1323 - 1323: Permanent error: Internal error
    Node 2: Backup 3 started from 5 has been aborted.
      Error: 1323 - 1323: Permanent error: Internal error
    Node 4: Backup 3 started from 5 has been aborted.
      Error: 1323 - 1323: Permanent error: Internal error
    

    In this example, we have shown sample output for a cluster with 4 data nodes, where the sequence number of the backup to be aborted is 3, and the management node to which the cluster management client is connected has the node ID 5. The first node to complete its part in aborting the backup reports that the reason for the abort was due to a request by the user. (The remaining nodes report that the backup was aborted due to an unspecified internal error.)

    Note

    There is no guarantee that the cluster nodes respond to an ABORT BACKUP command in any particular order.

    The message Backup backup_id started from node management_node_id has been aborted means that the backup has been terminated and that all files relating to this backup have been removed from the cluster file system.

It is also possible to abort a backup in progress from a system shell using this command:

shell> ndb_mgm -e "ABORT BACKUP backup_id"
Note

If there is no backup having the ID backup_id running when an ABORT BACKUP is issued, the management client makes no response, nor is it indicated in the cluster log that an invalid abort command was sent.

21.5.3.3 Configuration for NDB Cluster Backups

Five configuration parameters are essential for backup:

  • BackupDataBufferSize

    The amount of memory used to buffer data before it is written to disk.

  • BackupLogBufferSize

    The amount of memory used to buffer log records before these are written to disk.

  • BackupMemory

    The total memory allocated in a data node for backups. This should be the sum of the memory allocated for the backup data buffer and the backup log buffer.

  • BackupWriteSize

    The default size of blocks written to disk. This applies for both the backup data buffer and the backup log buffer.

  • BackupMaxWriteSize

    The maximum size of blocks written to disk. This applies for both the backup data buffer and the backup log buffer.

More detailed information about these parameters can be found in Backup Parameters.

You can also set a location for the backup files using the BackupDataDir configuration parameter. The default is FileSystemPath/BACKUP/BACKUP-backup_id.

21.5.3.4 NDB Cluster Backup Troubleshooting

If an error code is returned when issuing a backup request, the most likely cause is insufficient memory or disk space. You should check that there is enough memory allocated for the backup.

Important

If you have set BackupDataBufferSize and BackupLogBufferSize and their sum is greater than 4MB, you must also set BackupMemory.
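
As a quick sanity check, the sum can be computed in the shell; the buffer sizes below are illustrative values, not defaults:

```shell
# Illustrative buffer sizes in bytes.
backup_data_buffer_size=$((16 * 1024 * 1024))  # BackupDataBufferSize: 16M
backup_log_buffer_size=$((4 * 1024 * 1024))    # BackupLogBufferSize: 4M

# BackupMemory should be at least the sum of the two buffers.
backup_memory=$((backup_data_buffer_size + backup_log_buffer_size))
echo "$backup_memory"
```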

You should also make sure that there is sufficient space on the hard drive partition of the backup target.

NDB does not support repeatable reads, which can cause problems with the restoration process. Although the backup process is hot, restoring an NDB Cluster from backup is not a 100% hot process. This is due to the fact that, for the duration of the restore process, running transactions get nonrepeatable reads from the restored data. This means that the state of the data is inconsistent while the restore is in progress.

21.5.4 MySQL Server Usage for NDB Cluster

mysqld is the traditional MySQL server process. To be used with NDB Cluster, mysqld needs to be built with support for the NDB storage engine, as it is in the precompiled binaries available from https://dev.mysql.com/downloads/. If you build MySQL from source, you must invoke CMake with the -DWITH_NDBCLUSTER=1 option to include support for NDB.

For more information about compiling NDB Cluster from source, see Section 21.2.3.4, “Building NDB Cluster from Source on Linux”, and Section 21.2.4.2, “Compiling and Installing NDB Cluster from Source on Windows”.

(For information about mysqld options and variables, in addition to those discussed in this section, which are relevant to NDB Cluster, see Section 21.3.3.9, “MySQL Server Options and Variables for NDB Cluster”.)

If the mysqld binary has been built with Cluster support, the NDBCLUSTER storage engine is still disabled by default. You can use either of two possible options to enable this engine:

  • Use --ndbcluster as a startup option on the command line when starting mysqld.

  • Insert a line containing ndbcluster in the [mysqld] section of your my.cnf file.

An easy way to verify that your server is running with the NDBCLUSTER storage engine enabled is to issue the SHOW ENGINES statement in the MySQL Monitor (mysql). You should see the value YES as the Support value in the row for NDBCLUSTER. If you see NO in this row or if there is no such row displayed in the output, you are not running an NDB-enabled version of MySQL. If you see DISABLED in this row, you need to enable it in either one of the two ways just described.
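
The check itself can be scripted. The sketch below parses sample (not live) SHOW ENGINES output with awk; a real script would pipe in the output of mysql -e "SHOW ENGINES" instead of the here-document:

```shell
# Extract the Support value for ndbcluster from sample SHOW ENGINES
# output; the rows below are illustrative, not live server output.
support=$(awk '$1 == "ndbcluster" { print $2 }' <<'EOF'
InnoDB DEFAULT
ndbcluster YES
MyISAM YES
EOF
)
echo "$support"
```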

To read cluster configuration data, the MySQL server requires at a minimum three pieces of information:

  • The MySQL server's own cluster node ID

  • The host name or IP address for the management server (MGM node)

  • The number of the TCP/IP port on which it can connect to the management server

Node IDs can be allocated dynamically, so it is not strictly necessary to specify them explicitly.

The mysqld parameter ndb-connectstring is used to specify the connection string either on the command line when starting mysqld or in my.cnf. The connection string contains the host name or IP address where the management server can be found, as well as the TCP/IP port it uses.

In the following example, ndb_mgmd.mysql.com is the host where the management server resides, and the management server listens for cluster messages on port 1186:

shell> mysqld --ndbcluster --ndb-connectstring=ndb_mgmd.mysql.com:1186

See Section 21.3.3.3, “NDB Cluster Connection Strings”, for more information on connection strings.
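
The host and port parts of such a connection string can be split with ordinary shell parameter expansion, as in this sketch; note that real connection strings may list several hosts separated by commas, which this sketch does not handle:

```shell
# Split an NDB connection string of the simple form host:port.
connectstring="ndb_mgmd.mysql.com:1186"
host=${connectstring%:*}    # strip the trailing :port
port=${connectstring##*:}   # strip everything up to the last colon
echo "$host $port"
```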

Given this information, the MySQL server will be a full participant in the cluster. (We often refer to a mysqld process running in this manner as an SQL node.) It will be fully aware of all cluster data nodes as well as their status, and will establish connections to all data nodes. In this case, it is able to use any data node as a transaction coordinator and to read and update node data.

You can see in the mysql client whether a MySQL server is connected to the cluster using SHOW PROCESSLIST. If the MySQL server is connected to the cluster, and you have the PROCESS privilege, then the first row of the output is as shown here:

mysql> SHOW PROCESSLIST \G
*************************** 1. row ***************************
     Id: 1
   User: system user
   Host:
     db:
Command: Daemon
   Time: 1
  State: Waiting for event from ndbcluster
   Info: NULL
Important

To participate in an NDB Cluster, the mysqld process must be started with both the options --ndbcluster and --ndb-connectstring (or their equivalents in my.cnf). If mysqld is started with only the --ndbcluster option, or if it is unable to contact the cluster, it is not possible to work with NDB tables, nor is it possible to create any new tables regardless of storage engine. The latter restriction is a safety measure intended to prevent the creation of tables having the same names as NDB tables while the SQL node is not connected to the cluster. If you wish to create tables using a different storage engine while the mysqld process is not participating in an NDB Cluster, you must restart the server without the --ndbcluster option.

21.5.5 Performing a Rolling Restart of an NDB Cluster

This section discusses how to perform a rolling restart of an NDB Cluster installation, so called because it involves stopping and starting (or restarting) each node in turn, so that the cluster itself remains operational. This is often done as part of a rolling upgrade or rolling downgrade, where high availability of the cluster is mandatory and no downtime of the cluster as a whole is permissible. Where we refer to upgrades, the information provided here also generally applies to downgrades as well.

There are a number of reasons why a rolling restart might be desirable. These are described in the next few paragraphs.

Configuration change.  To make a change in the cluster's configuration, such as adding an SQL node to the cluster, or setting a configuration parameter to a new value.

NDB Cluster software upgrade or downgrade.  To upgrade the cluster to a newer version of the NDB Cluster software (or to downgrade it to an older version). This is usually referred to as a rolling upgrade (or rolling downgrade, when reverting to an older version of NDB Cluster).

Change on node host.  To make changes in the hardware or operating system on which one or more NDB Cluster node processes are running.

System reset (cluster reset).  To reset the cluster because it has reached an undesirable state. In such cases it is often desirable to reload the data and metadata of one or more data nodes. This can be done in any of three ways:

  • Start each data node process (ndbd or possibly ndbmtd) with the --initial option, which forces the data node to clear its file system and to reload all NDB Cluster data and metadata from the other data nodes.

  • Create a backup using the ndb_mgm client START BACKUP command prior to performing the restart. Following the upgrade, restore the node or nodes using ndb_restore.

    See Section 21.5.3, “Online Backup of NDB Cluster”, and Section 21.4.24, “ndb_restore — Restore an NDB Cluster Backup”, for more information.

  • Use mysqldump to create a backup prior to the upgrade; afterward, restore the dump using LOAD DATA.

Resource Recovery.  To free memory previously allocated to a table by successive INSERT and DELETE operations, for re-use by other NDB Cluster tables.

The process for performing a rolling restart may be generalized as follows:

  1. Stop all cluster management nodes (ndb_mgmd processes), reconfigure them, then restart them. (See Rolling restarts with multiple management servers.)

  2. Stop, reconfigure, then restart each cluster data node (ndbd process) in turn.

    Some node configuration parameters can be updated by issuing RESTART for each of the data nodes in the ndb_mgm client following the previous step; others require that the data node be stopped completely using a shell command (such as kill on most Unix systems) or the management client STOP command, then started again from a system shell by invoking the ndbd or ndbmtd executable as appropriate.

    Note

    On Windows, you can also use SC STOP and SC START commands, NET STOP and NET START commands, or the Windows Service Manager to stop and start nodes which have been installed as Windows services (see Section 21.2.4.4, “Installing NDB Cluster Processes as Windows Services”).

    The type of restart required is indicated in the documentation for each node configuration parameter. See Section 21.3.3, “NDB Cluster Configuration Files”.

  3. Stop, reconfigure, then restart each cluster SQL node (mysqld process) in turn.
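
A dry run of the data-node portion of this sequence can be sketched in the shell; the node IDs are hypothetical, and the commands are collected and printed rather than executed:

```shell
# Build (do not execute) the management-client RESTART sequence for a
# rolling restart of data nodes 2 and 3; node IDs are hypothetical.
plan=""
for node in 2 3; do
  plan="$plan$node RESTART;"   # one ndb_mgm RESTART command per node
done
echo "$plan"
```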

NDB Cluster supports a somewhat flexible order for upgrading nodes. When upgrading an NDB Cluster, you may upgrade API nodes (including SQL nodes) before upgrading the management nodes, data nodes, or both. In other words, you are permitted to upgrade the API and SQL nodes in any order. This is subject to the following provisions:

  • This functionality is intended for use as part of an online upgrade only. A mix of node binaries from different NDB Cluster releases is neither intended nor supported for continuous, long-term use in a production setting.

  • All management nodes must be upgraded before any data nodes are upgraded. This remains true regardless of the order in which you upgrade the cluster's API and SQL nodes.

  • Features specific to the new version must not be used until all management nodes and data nodes have been upgraded.

    This also applies to any MySQL Server version change that may apply, in addition to the NDB engine version change, so do not forget to take this into account when planning the upgrade. (This is true for online upgrades of NDB Cluster in general.)

See also Bug #48528 and Bug #49163.

Note

It is not possible for any API node to perform schema operations (such as data definition statements) during a node restart.

Rolling restarts with multiple management servers.  When performing a rolling restart of an NDB Cluster with multiple management nodes, you should keep in mind that ndb_mgmd checks to see if any other management node is running, and, if so, tries to use that node's configuration data. To keep this from occurring, and to force ndb_mgmd to reread its configuration file, perform the following steps:

  1. Stop all NDB Cluster ndb_mgmd processes.

  2. Update all config.ini files.

  3. Start a single ndb_mgmd with --reload, --initial, or both options as desired.

  4. If you started the first ndb_mgmd with the --initial option, you must also start any remaining ndb_mgmd processes using --initial.

    Regardless of any other options used when starting the first ndb_mgmd, you should not start any remaining ndb_mgmd processes after the first one using --reload.

  5. Complete the rolling restarts of the data nodes and API nodes as normal.

When performing a rolling restart to update the cluster's configuration, you can use the config_generation column of the ndbinfo.nodes table to keep track of which data nodes have been successfully restarted with the new configuration. See Section 21.5.10.28, “The ndbinfo nodes Table”.

21.5.6 Event Reports Generated in NDB Cluster

In this section, we discuss the types of event logs provided by NDB Cluster, and the types of events that are logged.

NDB Cluster provides two types of event log:

  • The cluster log, which includes events generated by all cluster nodes. The cluster log is the log recommended for most uses because it provides logging information for an entire cluster in a single location.

    By default, the cluster log is saved to a file named ndb_node_id_cluster.log (where node_id is the node ID of the management server) in the management server's DataDir.

    Cluster logging information can also be sent to stdout or a syslog facility in addition to or instead of being saved to a file, as determined by the values set for the DataDir and LogDestination configuration parameters. See Section 21.3.3.5, “Defining an NDB Cluster Management Server”, for more information about these parameters.

  • Node logs are local to each node.

    Output generated by node event logging is written to the file ndb_node_id_out.log (where node_id is the node's node ID) in the node's DataDir. Node event logs are generated for both management nodes and data nodes.

    Node logs are intended to be used only during application development, or for debugging application code.

Both types of event logs can be set to log different subsets of events.

Each reportable event can be distinguished according to three different criteria:

  • Category: This can be any one of the following values: STARTUP, SHUTDOWN, STATISTICS, CHECKPOINT, NODERESTART, CONNECTION, ERROR, or INFO.

  • Priority: This is represented by one of the numbers from 0 to 15 inclusive, where 0 indicates most important and 15 least important.

  • Severity Level: This can be any one of the following values: ALERT, CRITICAL, ERROR, WARNING, INFO, or DEBUG.

Both the cluster log and the node log can be filtered on these properties.

The format used in the cluster log is as shown here:

2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 1: Data usage is 2%(60 32K pages of total 2560)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 1: Index usage is 1%(24 8K pages of total 2336)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 1: Resource 0 min: 0 max: 639 curr: 0
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 2: Data usage is 2%(76 32K pages of total 2560)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 2: Index usage is 1%(24 8K pages of total 2336)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 2: Resource 0 min: 0 max: 639 curr: 0
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 3: Data usage is 2%(58 32K pages of total 2560)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 3: Index usage is 1%(25 8K pages of total 2336)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 3: Resource 0 min: 0 max: 639 curr: 0
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 4: Data usage is 2%(74 32K pages of total 2560)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 4: Index usage is 1%(25 8K pages of total 2336)
2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 4: Resource 0 min: 0 max: 639 curr: 0
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 4: Node 9 Connected
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 1: Node 9 Connected
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 1: Node 9: API 5.7.28-ndb-7.5.16
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 2: Node 9 Connected
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 2: Node 9: API 5.7.28-ndb-7.5.16
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 3: Node 9 Connected
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 3: Node 9: API 5.7.28-ndb-7.5.16
2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 4: Node 9: API 5.7.28-ndb-7.5.16
2007-01-26 19:59:22 [MgmSrvr] ALERT    -- Node 2: Node 7 Disconnected
2007-01-26 19:59:22 [MgmSrvr] ALERT    -- Node 2: Node 7 Disconnected

Each line in the cluster log contains the following information:

  • A timestamp in YYYY-MM-DD HH:MM:SS format.

  • The type of node which is performing the logging. In the cluster log, this is always [MgmSrvr].

  • The severity of the event.

  • The ID of the node reporting the event.

  • A description of the event. The most common types of events to appear in the log are connections and disconnections between different nodes in the cluster, and when checkpoints occur. In some cases, the description may contain status information.

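These fields can be extracted mechanically from each line. The following Python sketch (illustrative only; the function and pattern names are not part of any NDB tool) parses a cluster log line into its five components:

```python
import re

# Matches lines such as:
# 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 1: Data usage is 2%(60 32K pages of total 2560)
CLUSTER_LOG_LINE = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<node_type>\w+)\] "
    r"(?P<severity>[A-Z]+)\s+-- "
    r"Node (?P<node_id>\d+): "
    r"(?P<message>.*)$"
)

def parse_cluster_log_line(line):
    """Return the five fields of a cluster log line as a dict, or None."""
    match = CLUSTER_LOG_LINE.match(line)
    return match.groupdict() if match else None
```

Applied to the first sample line shown earlier, this yields node_type 'MgmSrvr', severity 'INFO', node_id '1', and the descriptive message text.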
21.5.6.1 NDB Cluster Logging Management Commands

ndb_mgm supports a number of management commands related to the cluster log and node logs. In the listing that follows, node_id denotes either a storage node ID or the keyword ALL, which indicates that the command should be applied to all of the cluster's data nodes.

  • CLUSTERLOG ON

    Turns the cluster log on.

  • CLUSTERLOG OFF

    Turns the cluster log off.

  • CLUSTERLOG INFO

    Provides information about cluster log settings.

  • node_id CLUSTERLOG category=threshold

    Logs category events with priority less than or equal to threshold in the cluster log.

  • CLUSTERLOG FILTER severity_level

    Toggles cluster logging of events of the specified severity_level.

The following table describes the default setting (for all data nodes) of the cluster log category threshold. If an event has a priority with a value lower than or equal to the priority threshold, it is reported in the cluster log.

Note

Events are reported per data node, and the threshold can be set to different values on different nodes.

Table 21.343 Cluster log categories, with default threshold setting

Category Default threshold (All data nodes)
STARTUP 7
SHUTDOWN 7
STATISTICS 7
CHECKPOINT 7
NODERESTART 7
CONNECTION 7
ERROR 15
INFO 7

The STATISTICS category can provide a great deal of useful data. See Section 21.5.6.3, “Using CLUSTERLOG STATISTICS in the NDB Cluster Management Client”, for more information.

Thresholds are used to filter events within each category. For example, a STARTUP event with a priority of 3 is not logged unless the threshold for STARTUP is set to 3 or higher. Only events with priority 3 or lower are sent if the threshold is 3.

The following table shows the event severity levels.

Note

These correspond to Unix syslog levels, except for LOG_EMERG and LOG_NOTICE, which are not used or mapped.

Table 21.344 Event severity levels

Severity Level Value Severity Description
1 ALERT A condition that should be corrected immediately, such as a corrupted system database
2 CRITICAL Critical conditions, such as device errors or insufficient resources
3 ERROR Conditions that should be corrected, such as configuration errors
4 WARNING Conditions that are not errors, but that might require special handling
5 INFO Informational messages
6 DEBUG Debugging messages used for NDBCLUSTER development

Event severity levels can be turned on or off (using CLUSTERLOG FILTER—see above). If a severity level is turned on, then all events with a priority less than or equal to the category thresholds are logged. If the severity level is turned off then no events belonging to that severity level are logged.

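Taken together, the threshold and severity rules amount to a two-stage test: the event's severity level must be turned on, and its priority must not exceed the category threshold. The sketch below models this in Python (the function and variable names are illustrative, not any NDB API):

```python
# Default category thresholds from Table 21.343.
DEFAULT_THRESHOLDS = {
    "STARTUP": 7, "SHUTDOWN": 7, "STATISTICS": 7, "CHECKPOINT": 7,
    "NODERESTART": 7, "CONNECTION": 7, "ERROR": 15, "INFO": 7,
}

def is_logged(category, priority, severity, enabled_severities,
              thresholds=DEFAULT_THRESHOLDS):
    """An event reaches the cluster log only if its severity level is
    turned on AND its priority is less than or equal to the threshold
    set for its category."""
    return severity in enabled_severities and priority <= thresholds[category]
```

With the defaults, a STARTUP event of priority 3 and severity INFO is logged, while an otherwise identical event of priority 9 is filtered out.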
Important

Cluster log levels are set on a per ndb_mgmd, per subscriber basis. This means that, in an NDB Cluster with multiple management servers, using a CLUSTERLOG command in an instance of ndb_mgm connected to one management server affects only logs generated by that management server but not by any of the others. This also means that, should one of the management servers be restarted, only logs generated by that management server are affected by the resetting of log levels caused by the restart.

21.5.6.2 NDB Cluster Log Events

An event report reported in the event logs has the following format:

datetime [string] severity -- message

For example:

09:19:30 2005-07-24 [NDB] INFO -- Node 4 Start phase 4 completed

This section discusses all reportable events, ordered by category and severity level within each category.

In the event descriptions, GCP and LCP mean Global Checkpoint and Local Checkpoint, respectively.

CONNECTION Events

These events are associated with connections between Cluster nodes.

Table 21.345 Events associated with connections between cluster nodes

Event Priority Severity Level Description
Connected 8 INFO Data nodes connected
Disconnected 8 ALERT Data nodes disconnected
CommunicationClosed 8 INFO SQL node or data node connection closed
CommunicationOpened 8 INFO SQL node or data node connection open
ConnectedApiVersion 8 INFO Connection using API version

CHECKPOINT Events

The logging messages shown here are associated with checkpoints.

Table 21.346 Events associated with checkpoints

Event Priority Severity Level Description
GlobalCheckpointStarted 9 INFO Start of GCP: REDO log is written to disk
GlobalCheckpointCompleted 10 INFO GCP finished
LocalCheckpointStarted 7 INFO Start of LCP: data written to disk
LocalCheckpointCompleted 7 INFO LCP completed normally
LCPStoppedInCalcKeepGci 0 ALERT LCP stopped
LCPFragmentCompleted 11 INFO LCP on a fragment has been completed
UndoLogBlocked 7 INFO UNDO logging blocked; buffer near overflow
RedoStatus 7 INFO Redo status

STARTUP Events

The following events are generated in response to the startup of a node or of the cluster and of its success or failure. They also provide information relating to the progress of the startup process, including information concerning logging activities.

Table 21.347 Events relating to the startup of a node or cluster

Event Priority Severity Level Description
NDBStartStarted 1 INFO Data node start phases initiated (all nodes starting)
NDBStartCompleted 1 INFO Start phases completed, all data nodes
STTORRYRecieved 15 INFO Blocks received after completion of restart
StartPhaseCompleted 4 INFO Data node start phase X completed
CM_REGCONF 3 INFO Node has been successfully included into the cluster; shows the node, managing node, and dynamic ID
CM_REGREF 8 INFO Node has been refused for inclusion in the cluster; cannot be included in cluster due to misconfiguration, inability to establish communication, or other problem
FIND_NEIGHBOURS 8 INFO Shows neighboring data nodes
NDBStopStarted 1 INFO Data node shutdown initiated
NDBStopCompleted 1 INFO Data node shutdown complete
NDBStopForced 1 ALERT Forced shutdown of data node
NDBStopAborted 1 INFO Unable to shut down data node normally
StartREDOLog 4 INFO New redo log started; GCI keep X, newest restorable GCI Y
StartLog 10 INFO New log started; log part X, start MB Y, stop MB Z
UNDORecordsExecuted 15 INFO Undo records executed
StartReport 4 INFO Report started
LogFileInitStatus 7 INFO Log file initialization status
LogFileInitCompStatus 7 INFO Log file completion status
StartReadLCP 10 INFO Start read for local checkpoint
ReadLCPComplete 10 INFO Read for local checkpoint completed
RunRedo 8 INFO Running the redo log
RebuildIndex 10 INFO Rebuilding indexes

NODERESTART Events

The following events are generated when restarting a node and relate to the success or failure of the node restart process.

Table 21.348 Events relating to restarting a node

Event Priority Severity Level Description
NR_CopyDict 7 INFO Completed copying of dictionary information
NR_CopyDistr 7 INFO Completed copying distribution information
NR_CopyFragsStarted 7 INFO Starting to copy fragments
NR_CopyFragDone 10 INFO Completed copying a fragment
NR_CopyFragsCompleted 7 INFO Completed copying all fragments
NodeFailCompleted 8 ALERT Node failure phase completed
NODE_FAILREP 8 ALERT Reports that a node has failed
ArbitState 6 INFO Report whether an arbitrator is found or not; there are seven different possible outcomes when seeking an arbitrator, listed here:
  • Management server restarts arbitration thread [state=X]

  • Prepare arbitrator node X [ticket=Y]

  • Receive arbitrator node X [ticket=Y]

  • Started arbitrator node X [ticket=Y]

  • Lost arbitrator node X - process failure [state=Y]

  • Lost arbitrator node X - process exit [state=Y]

  • Lost arbitrator node X <error msg> [state=Y]

ArbitResult 2 ALERT Report arbitrator results; there are eight different possible results for arbitration attempts, listed here:
  • Arbitration check failed: less than 1/2 nodes left

  • Arbitration check succeeded: node group majority

  • Arbitration check failed: missing node group

  • Network partitioning: arbitration required

  • Arbitration succeeded: affirmative response from node X

  • Arbitration failed: negative response from node X

  • Network partitioning: no arbitrator available

  • Network partitioning: no arbitrator configured

GCP_TakeoverStarted 7 INFO GCP takeover started
GCP_TakeoverCompleted 7 INFO GCP takeover complete
LCP_TakeoverStarted 7 INFO LCP takeover started
LCP_TakeoverCompleted 7 INFO LCP takeover complete (state = X)
ConnectCheckStarted 6 INFO Connection check started
ConnectCheckCompleted 6 INFO Connection check completed
NodeFailRejected 6 ALERT Node failure phase failed

STATISTICS Events

The following events are of a statistical nature. They provide information such as numbers of transactions and other operations, amount of data sent or received by individual nodes, and memory usage.

Table 21.349 Events of a statistical nature

Event Priority Severity Level Description
TransReportCounters 8 INFO Report transaction statistics, including numbers of transactions, commits, reads, simple reads, writes, concurrent operations, attribute information, and aborts
OperationReportCounters 8 INFO Number of operations
TableCreated 7 INFO Report number of tables created
JobStatistic 9 INFO Mean internal job scheduling statistics
ThreadConfigLoop 9 INFO Number of thread configuration loops
SendBytesStatistic 9 INFO Mean number of bytes sent to node X
ReceiveBytesStatistic 9 INFO Mean number of bytes received from node X
MemoryUsage 5 INFO Data and index memory usage (80%, 90%, and 100%)
MTSignalStatistics 9 INFO Multithreaded signals

SCHEMA Events

These events relate to NDB Cluster schema operations.

Table 21.350 Events relating to NDB Cluster schema operations

Event Priority Severity Level Description
CreateSchemaObject 8 INFO Schema object created
AlterSchemaObject 8 INFO Schema object updated
DropSchemaObject 8 INFO Schema object dropped

ERROR Events

These events relate to Cluster errors and warnings. The presence of one or more of these generally indicates that a major malfunction or failure has occurred.

Table 21.351 Events relating to cluster errors and warnings

Event Priority Severity Level Description
TransporterError 2 ERROR Transporter error
TransporterWarning 8 WARNING Transporter warning
MissedHeartbeat 8 WARNING Node X missed heartbeat number Y
DeadDueToHeartbeat 8 ALERT Node X declared dead due to missed heartbeat
WarningEvent 2 WARNING General warning event
SubscriptionStatus 4 WARNING Change in subscription status

INFO Events

These events provide general information about the state of the cluster and activities associated with Cluster maintenance, such as logging and heartbeat transmission.

Table 21.352 Information events

Event Priority Severity Level Description
SentHeartbeat 12 INFO Sent heartbeat
CreateLogBytes 11 INFO Create log: Log part, log file, size in MB
InfoEvent 2 INFO General informational event
EventBufferStatus 7 INFO Event buffer status
EventBufferStatus2 7 INFO Improved event buffer status information; added in NDB 7.5.1

Note

SentHeartbeat events are available only if NDB Cluster was compiled with VM_TRACE enabled.

SINGLEUSER Events

These events are associated with entering and exiting single user mode.

Table 21.353 Events relating to single user mode

Event Priority Severity Level Description
SingleUser 7 INFO Entering or exiting single user mode

BACKUP Events

These events provide information about backups being created or restored.

Table 21.354 Backup events

Event Priority Severity Level Description
BackupStarted 7 INFO Backup started
BackupStatus 7 INFO Backup status
BackupCompleted 7 INFO Backup completed
BackupFailedToStart 7 ALERT Backup failed to start
BackupAborted 7 ALERT Backup aborted by user
RestoreStarted 7 INFO Started restoring from backup
RestoreMetaData 7 INFO Restoring metadata
RestoreData 7 INFO Restoring data
RestoreLog 7 INFO Restoring log files
RestoreCompleted 7 INFO Completed restoring from backup
SavedEvent 7 INFO Event saved

21.5.6.3 Using CLUSTERLOG STATISTICS in the NDB Cluster Management Client

The NDB management client's CLUSTERLOG STATISTICS command can provide a number of useful statistics in its output. Counters providing information about the state of the cluster are updated at 5-second reporting intervals by the transaction coordinator (TC) and the local query handler (LQH), and written to the cluster log.

Transaction coordinator statistics.  Each transaction has one transaction coordinator, which is chosen by one of the following methods:

  • In a round-robin fashion

  • By communication proximity

  • By supplying a data placement hint when the transaction is started

Note

You can determine which TC selection method is used for transactions started from a given SQL node using the ndb_optimized_node_selection system variable.

All operations within the same transaction use the same transaction coordinator, which reports the following statistics:

  • Trans count.  This is the number of transactions started in the last reporting interval using this TC as the transaction coordinator. Any of these transactions may have committed, have been aborted, or remain uncommitted at the end of the reporting interval.

    Note

    Transactions do not migrate between TCs.

  • Commit count.  This is the number of transactions using this TC as the transaction coordinator that were committed in the last reporting interval. Because some transactions committed in this reporting interval may have started in a previous reporting interval, it is possible for Commit count to be greater than Trans count.

  • Read count.  This is the number of primary key read operations using this TC as the transaction coordinator that were started in the last reporting interval, including simple reads. This count also includes reads performed as part of unique index operations. A unique index read operation generates 2 primary key read operations—1 for the hidden unique index table, and 1 for the table on which the read takes place.

  • Simple read count.  This is the number of simple read operations using this TC as the transaction coordinator that were started in the last reporting interval.

  • Write count.  This is the number of primary key write operations using this TC as the transaction coordinator that were started in the last reporting interval. This includes all inserts, updates, writes and deletes, as well as writes performed as part of unique index operations.

    Note

    A unique index update operation can generate multiple PK read and write operations on the index table and on the base table.

  • AttrInfoCount.  This is the number of 32-bit data words received in the last reporting interval for primary key operations using this TC as the transaction coordinator. For reads, this is proportional to the number of columns requested. For inserts and updates, this is proportional to the number of columns written, and the size of their data. For delete operations, this is usually zero.

    Unique index operations generate multiple PK operations and so increase this count. However, data words sent to describe the PK operation itself, and the key information sent, are not counted here. Attribute information sent to describe columns to read for scans, or to describe ScanFilters, is also not counted in AttrInfoCount.

  • Concurrent Operations.  This is the number of primary key or scan operations using this TC as the transaction coordinator that were started during the last reporting interval but that were not completed. Operations increment this counter when they are started and decrement it when they are completed; this occurs after the transaction commits. Dirty reads and writes—as well as failed operations—decrement this counter.

    The maximum value that Concurrent Operations can have is the maximum number of operations that a TC block can support; currently, this is (2 * MaxNoOfConcurrentOperations) + 16 + MaxNoOfConcurrentTransactions. (For more information about these configuration parameters, see the Transaction Parameters section of Section 21.3.3.6, “Defining NDB Cluster Data Nodes”.)

  • Abort count.  This is the number of transactions using this TC as the transaction coordinator that were aborted during the last reporting interval. Because some transactions that were aborted in the last reporting interval may have started in a previous reporting interval, Abort count can sometimes be greater than Trans count.

  • Scans.  This is the number of table scans using this TC as the transaction coordinator that were started during the last reporting interval. This does not include range scans (that is, ordered index scans).

  • Range scans.  This is the number of ordered index scans using this TC as the transaction coordinator that were started in the last reporting interval.

  • Local reads.  This is the number of primary-key read operations performed using a transaction coordinator on a node that also holds the primary replica of the record. This count can also be obtained from the LOCAL_READS counter in the ndbinfo.counters table.

  • Local writes.  This contains the number of primary-key write operations that were performed using a transaction coordinator on a node that also holds the primary replica of the record. This count can also be obtained from the LOCAL_WRITES counter in the ndbinfo.counters table.

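The ceiling on the Concurrent Operations counter given above is a simple function of two configuration parameters. As a quick sketch (assuming the documented default values of 32768 for MaxNoOfConcurrentOperations and 4096 for MaxNoOfConcurrentTransactions; the function name is illustrative):

```python
def max_tc_operations(max_no_of_concurrent_operations,
                      max_no_of_concurrent_transactions):
    """Maximum number of operations one TC block can support:
    (2 * MaxNoOfConcurrentOperations) + 16 + MaxNoOfConcurrentTransactions."""
    return (2 * max_no_of_concurrent_operations
            + 16
            + max_no_of_concurrent_transactions)

# With the assumed defaults: 2 * 32768 + 16 + 4096 = 69648
```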
Local query handler statistics (Operations).  There is 1 cluster event per local query handler block (that is, 1 per data node process). Operations are recorded in the LQH where the data they are operating on resides.

Note

A single transaction may operate on data stored in multiple LQH blocks.

The Operations statistic provides the number of local operations performed by this LQH block in the last reporting interval, and includes all types of read and write operations (insert, update, write, and delete operations). This also includes operations used to replicate writes. For example, in a 2-replica cluster, the write to the primary replica is recorded in the primary LQH, and the write to the backup will be recorded in the backup LQH. Unique key operations may result in multiple local operations; however, this does not include local operations generated as a result of a table scan or ordered index scan, which are not counted.

Process scheduler statistics.  In addition to the statistics reported by the transaction coordinator and local query handler, each ndbd process has a scheduler which also provides useful metrics relating to the performance of an NDB Cluster. This scheduler runs in an infinite loop; during each loop the scheduler performs the following tasks:

  1. Read any incoming messages from sockets into a job buffer.

  2. Check whether there are any timed messages to be executed; if so, put these into the job buffer as well.

  3. Execute (in a loop) any messages in the job buffer.

  4. Send any distributed messages that were generated by executing the messages in the job buffer.

  5. Wait for any new incoming messages.

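The five steps above can be sketched as an event loop. This is purely illustrative Python; the actual scheduler is internal to the ndbd process and is not scriptable in this form:

```python
from collections import deque

def run_scheduler_iteration(read_sockets, due_timed_messages, execute,
                            send, wait):
    """One pass of the scheduler loop described above (illustrative only).
    The callables stand in for the scheduler's internal mechanisms."""
    job_buffer = deque()
    job_buffer.extend(read_sockets())          # 1. read incoming socket messages
    job_buffer.extend(due_timed_messages())    # 2. queue timed messages now due
    outbound = []
    loops = 0
    while job_buffer:                          # 3. execute everything buffered
        outbound.extend(execute(job_buffer.popleft()))
        loops += 1
    send(outbound)                             # 4. send generated messages
    wait()                                     # 5. wait for new incoming messages
    return loops                               # the count behind Mean Loop Counter
```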
Process scheduler statistics include the following:

  • Mean Loop Counter.  This is the number of loops executed in the third step from the preceding list. This statistic increases in size as the utilization of the TCP/IP buffer improves. You can use this to monitor changes in performance as you add new data node processes.

  • Mean send size and Mean receive size.  These statistics enable you to gauge the efficiency of writes and reads, respectively, between nodes. The values are given in bytes. Higher values mean a lower cost per byte sent or received; the maximum value is 64K.

To cause all cluster log statistics to be logged, you can use the following command in the NDB management client:

ndb_mgm> ALL CLUSTERLOG STATISTICS=15
Note

Setting the threshold for STATISTICS to 15 causes the cluster log to become very verbose, and to grow quite rapidly in size, in direct proportion to the number of cluster nodes and the amount of activity in the NDB Cluster.

For more information about NDB Cluster management client commands relating to logging and reporting, see Section 21.5.6.1, “NDB Cluster Logging Management Commands”.

21.5.7 NDB Cluster Log Messages

This section contains information about the messages written to the cluster log in response to different cluster log events. It provides additional, more specific information on NDB transporter errors.

21.5.7.1 NDB Cluster: Messages in the Cluster Log

The following table lists the most common NDB cluster log messages. For information about the cluster log, log events, and event types, see Section 21.5.6, “Event Reports Generated in NDB Cluster”. These log messages also correspond to log event types in the MGM API; see The Ndb_logevent_type Type, for related information of interest to Cluster API developers.

Table 21.355 Common NDB cluster log messages

Log Message Description Event Name Event Type Priority Severity
Node mgm_node_id: Node data_node_id Connected The data node having node ID node_id has connected to the management server (node mgm_node_id). Connected Connection 8 INFO
Node mgm_node_id: Node data_node_id Disconnected The data node having node ID data_node_id has disconnected from the management server (node mgm_node_id). Disconnected Connection 8 ALERT
Node data_node_id: Communication to Node api_node_id closed The API node or SQL node having node ID api_node_id is no longer communicating with data node data_node_id. CommunicationClosed Connection 8 INFO
Node data_node_id: Communication to Node api_node_id opened The API node or SQL node having node ID api_node_id is now communicating with data node data_node_id. CommunicationOpened Connection 8 INFO
Node mgm_node_id: Node api_node_id: API version The API node having node ID api_node_id has connected to management node mgm_node_id using NDB API version version (generally the same as the MySQL version number). ConnectedApiVersion Connection 8 INFO
Node node_id: Global checkpoint gci started A global checkpoint with the ID gci has been started; node node_id is the master responsible for this global checkpoint. GlobalCheckpointStarted Checkpoint 9 INFO
Node node_id: Global checkpoint gci completed The global checkpoint having the ID gci has been completed; node node_id was the master responsible for this global checkpoint. GlobalCheckpointCompleted Checkpoint 10 INFO
Node node_id: Local checkpoint lcp started. Keep GCI = current_gci oldest restorable GCI = old_gci The local checkpoint having sequence ID lcp has been started on node node_id. The most recent GCI that can be used has the index current_gci, and the oldest GCI from which the cluster can be restored has the index old_gci. LocalCheckpointStarted Checkpoint 7 INFO
Node node_id: Local checkpoint lcp completed The local checkpoint having sequence ID lcp on node node_id has been completed. LocalCheckpointCompleted Checkpoint 8 INFO
Node node_id: Local Checkpoint stopped in CALCULATED_KEEP_GCI The node was unable to determine the most recent usable GCI. LCPStoppedInCalcKeepGci Checkpoint 0 ALERT
Node node_id: Table ID = table_id, fragment ID = fragment_id has completed LCP on Node node_id maxGciStarted: started_gci maxGciCompleted: completed_gci A table fragment has been checkpointed to disk on node node_id. The GCI in progress has the index started_gci, and the most recent GCI to have been completed has the index completed_gci. LCPFragmentCompleted Checkpoint 11 INFO
Node node_id: ACC Blocked num_1 and TUP Blocked num_2 times last second Undo logging is blocked because the log buffer is close to overflowing. UndoLogBlocked Checkpoint 7 INFO
Node node_id: Start initiated version Data node node_id, running NDB version version, is beginning its startup process. NDBStartStarted StartUp 1 INFO
Node node_id: Started version Data node node_id, running NDB version version, has started successfully. NDBStartCompleted StartUp 1 INFO
Node node_id: STTORRY received after restart finished The node has received a signal indicating that a cluster restart has completed. STTORRYRecieved StartUp 15 INFO
Node node_id: Start phase phase completed (type) The node has completed start phase phase of a type start. For a listing of start phases, see Section 21.5.1, “Summary of NDB Cluster Start Phases”. (type is one of initial, system, node, initial node, or <Unknown>.) StartPhaseCompleted StartUp 4 INFO
Node node_id: CM_REGCONF president = president_id, own Node = own_id, our dynamic id = dynamic_id Node president_id has been selected as president. own_id and dynamic_id should always be the same as the ID (node_id) of the reporting node. CM_REGCONF StartUp 3 INFO
Node node_id: CM_REGREF from Node president_id to our Node node_id. Cause = cause The reporting node (ID node_id) was unable to accept node president_id as president. The cause of the problem is given as one of Busy, Election with wait = false, Not president, Election without selecting new candidate, or No such cause. CM_REGREF StartUp 8 INFO
Node node_id: We are Node own_id with dynamic ID dynamic_id, our left neighbor is Node id_1, our right is Node id_2 The node has discovered its neighboring nodes in the cluster (node id_1 and node id_2). node_id, own_id, and dynamic_id should always be the same; if they are not, this indicates a serious misconfiguration of the cluster nodes. FIND_NEIGHBOURS StartUp 8 INFO
Node node_id: type shutdown initiated The node has received a shutdown signal. The type of shutdown is either Cluster or Node. NDBStopStarted StartUp 1 INFO
Node node_id: Node shutdown completed [, action] [Initiated by signal signal.] The node has been shut down. This report may include an action, which if present is one of restarting, no start, or initial. The report may also include a reference to an NDB Protocol signal; for possible signals, refer to Operations and Signals. NDBStopCompleted StartUp 1 INFO
Node node_id: Forced node shutdown completed [, action]. [Occurred during startphase start_phase.] [ Initiated by signal.] [Caused by error error_code: 'error_message(error_classification). error_status'. [(extra info extra_code)]] The node has been forcibly shut down. The action (one of restarting, no start, or initial) subsequently being taken, if any, is also reported. If the shutdown occurred while the node was starting, the report includes the start_phase during which the node failed. If this was a result of a signal sent to the node, this information is also provided (see Operations and Signals, for more information). If the error causing the failure is known, this is also included; for more information about NDB error messages and classifications, see NDB Cluster API Errors. NDBStopForced StartUp 1 ALERT
Node node_id: Node shutdown aborted The node shutdown process was aborted by the user. NDBStopAborted StartUp 1 INFO
Node node_id: StartLog: [GCI Keep: keep_pos LastCompleted: last_pos NewestRestorable: restore_pos] This reports global checkpoints referenced during a node start. The redo log prior to keep_pos is dropped. last_pos is the last global checkpoint in which the data node participated; restore_pos is the global checkpoint actually used to restore all data nodes. StartREDOLog StartUp 4 INFO
startup_message [Listed separately; see below.] There are a number of possible startup messages that can be logged under different circumstances. These are listed separately; see Section 21.5.7.2, “NDB Cluster Log Startup Messages”. StartReport StartUp 4 INFO
Node node_id: Node restart completed copy of dictionary information Copying of data dictionary information to the restarted node has been completed. NR_CopyDict NodeRestart 8 INFO
Node node_id: Node restart completed copy of distribution information Copying of data distribution information to the restarted node has been completed. NR_CopyDistr NodeRestart 8 INFO
Node node_id: Node restart starting to copy the fragments to Node node_id Copy of fragments to starting data node node_id has begun NR_CopyFragsStarted NodeRestart 8 INFO
Node node_id: Table ID = table_id, fragment ID = fragment_id have been copied to Node node_id Fragment fragment_id from table table_id has been copied to data node node_id NR_CopyFragDone NodeRestart 10 INFO
Node node_id: Node restart completed copying the fragments to Node node_id Copying of all table fragments to restarting data node node_id has been completed NR_CopyFragsCompleted NodeRestart 8 INFO
Node node_id: Node node1_id completed failure of Node node2_id Data node node1_id has detected the failure of data node node2_id NodeFailCompleted NodeRestart 8 ALERT
All nodes completed failure of Node node_id All (remaining) data nodes have detected the failure of data node node_id NodeFailCompleted NodeRestart 8 ALERT
Node failure of node_id block completed The failure of data node node_id has been detected in the block NDB kernel block, where block is one of DBTC, DBDICT, DBDIH, or DBLQH; for more information, see NDB Kernel Blocks NodeFailCompleted NodeRestart 8 ALERT
Node mgm_node_id: Node data_node_id has failed. The Node state at failure was state_code A data node has failed. Its state at the time of failure is described by an arbitration state code state_code: possible state code values can be found in the file include/kernel/signaldata/ArbitSignalData.hpp. NODE_FAILREP NodeRestart 8 ALERT
President restarts arbitration thread [state=state_code] or Prepare arbitrator node node_id [ticket=ticket_id] or Receive arbitrator node node_id [ticket=ticket_id] or Started arbitrator node node_id [ticket=ticket_id] or Lost arbitrator node node_id - process failure [state=state_code] or Lost arbitrator node node_id - process exit [state=state_code] or Lost arbitrator node node_id - error_message [state=state_code] This is a report on the current state and progress of arbitration in the cluster. node_id is the node ID of the management node or SQL node selected as the arbitrator. state_code is an arbitration state code, as found in include/kernel/signaldata/ArbitSignalData.hpp. When an error has occurred, an error_message, also defined in ArbitSignalData.hpp, is provided. ticket_id is a unique identifier handed out by the arbitrator when it is selected to all the nodes that participated in its selection; this is used to ensure that each node requesting arbitration was one of the nodes that took part in the selection process. ArbitState NodeRestart 6 INFO
Arbitration check lost - less than 1/2 nodes left or Arbitration check won - all node groups and more than 1/2 nodes left or Arbitration check won - node group majority or Arbitration check lost - missing node group or Network partitioning - arbitration required or Arbitration won - positive reply from node node_id or Arbitration lost - negative reply from node node_id or Network partitioning - no arbitrator available or Network partitioning - no arbitrator configured or Arbitration failure - error_message [state=state_code] This message reports on the result of arbitration. In the event of arbitration failure, an error_message and an arbitration state_code are provided; definitions for both of these are found in include/kernel/signaldata/ArbitSignalData.hpp. ArbitResult NodeRestart 2 ALERT
Node node_id: GCP Take over started This node is attempting to assume responsibility for the next global checkpoint (that is, it is becoming the master node) GCP_TakeoverStarted NodeRestart 7 INFO
Node node_id: GCP Take over completed This node has become the master, and has assumed responsibility for the next global checkpoint GCP_TakeoverCompleted NodeRestart 7 INFO
Node node_id: LCP Take over started This node is attempting to assume responsibility for the next set of local checkpoints (that is, it is becoming the master node) LCP_TakeoverStarted NodeRestart 7 INFO
Node node_id: LCP Take over completed This node has become the master, and has assumed responsibility for the next set of local checkpoints LCP_TakeoverCompleted NodeRestart 7 INFO
Node node_id: Trans. Count = transactions, Commit Count = commits, Read Count = reads, Simple Read Count = simple_reads, Write Count = writes, AttrInfo Count = AttrInfo_objects, Concurrent Operations = concurrent_operations, Abort Count = aborts, Scans = scans, Range scans = range_scans This report of transaction activity is given approximately once every 10 seconds TransReportCounters Statistic 8 INFO
Node node_id: Operations=operations Number of operations performed by this node, provided approximately once every 10 seconds OperationReportCounters Statistic 8 INFO
Node node_id: Table with ID = table_id created A table having the table ID shown has been created TableCreated Statistic 7 INFO
Node node_id: Mean loop Counter in doJob last 8192 times = count JobStatistic Statistic 9 INFO
Mean send size to Node = node_id last 4096 sends = bytes bytes This node is sending an average of bytes bytes per send to node node_id SendBytesStatistic Statistic 9 INFO
Mean receive size to Node = node_id last 4096 sends = bytes bytes This node is receiving an average of bytes bytes of data each time it receives data from node node_id ReceiveBytesStatistic Statistic 9 INFO
Node node_id: Data usage is data_memory_percentage% (data_pages_used 32K pages of total data_pages_total) / Node node_id: Index usage is index_memory_percentage% (index_pages_used 8K pages of total index_pages_total) This report is generated when a DUMP 1000 command is issued in the cluster management client MemoryUsage Statistic 5 INFO
Node node1_id: Transporter to node node2_id reported error error_code: error_message A transporter error occurred while communicating with node node2_id; for a listing of transporter error codes and messages, see NDB Transporter Errors, in MySQL NDB Cluster Internals Manual TransporterError Error 2 ERROR
Node node1_id: Transporter to node node2_id reported error error_code: error_message A warning of a potential transporter problem while communicating with node node2_id; for a listing of transporter error codes and messages, see NDB Transporter Errors TransporterWarning Error 8 WARNING
Node node1_id: Node node2_id missed heartbeat heartbeat_id This node missed a heartbeat from node node2_id MissedHeartbeat Error 8 WARNING
Node node1_id: Node node2_id declared dead due to missed heartbeat This node has missed at least 3 heartbeats from node node2_id, and so has declared that node dead DeadDueToHeartbeat Error 8 ALERT
Node node1_id: Node Sent Heartbeat to node = node2_id This node has sent a heartbeat to node node2_id SentHeartbeat Info 12 INFO
(NDB 7.5.0 and earlier:) Node node_id: Event buffer status: used=bytes_used (percent_used%) alloc=bytes_allocated (percent_available%) max=bytes_available apply_epoch=latest_restorable_epoch latest_epoch=latest_epoch This report is seen during heavy event buffer usage, for example, when many updates are being applied in a relatively short period of time; the report shows the number of bytes and the percentage of event buffer memory used, the bytes allocated and percentage still available, and the latest and latest restorable epochs EventBufferStatus Info 7 INFO
(NDB 7.5.1 and later:) Node node_id: Event buffer status (object_id): used=bytes_used (percent_used% of alloc) alloc=bytes_allocated max=bytes_available latest_consumed_epoch=latest_consumed_epoch latest_buffered_epoch=latest_buffered_epoch report_reason=report_reason This report is seen during heavy event buffer usage, for example, when many updates are being applied in a relatively short period of time; the report shows the number of bytes and the percentage of event buffer memory used, the bytes allocated and percentage still available, and the latest buffered and consumed epochs; for more information, see Section 21.5.7.3, “Event Buffer Reporting in the Cluster Log” EventBufferStatus2 Info 7 INFO
Node node_id: Entering single user mode, Node node_id: Entered single user mode Node API_node_id has exclusive access, Node node_id: Exiting single user mode These reports are written to the cluster log when entering and exiting single user mode; API_node_id is the node ID of the API or SQL node having exclusive access to the cluster (for more information, see Section 21.5.8, “NDB Cluster Single User Mode”); the message Unknown single user report API_node_id indicates that an error has taken place and should never be seen in normal operation SingleUser Info 7 INFO
Node node_id: Backup backup_id started from node mgm_node_id A backup has been started using the management node having mgm_node_id; this message is also displayed in the cluster management client when the START BACKUP command is issued; for more information, see Section 21.5.3.2, “Using The NDB Cluster Management Client to Create a Backup” BackupStarted Backup 7 INFO
Node node_id: Backup backup_id started from node mgm_node_id completed. StartGCP: start_gcp StopGCP: stop_gcp #Records: records #LogRecords: log_records Data: data_bytes bytes Log: log_bytes bytes The backup having the ID backup_id has been completed; for more information, see Section 21.5.3.2, “Using The NDB Cluster Management Client to Create a Backup” BackupCompleted Backup 7 INFO
Node node_id: Backup request from mgm_node_id failed to start. Error: error_code The backup failed to start; for error codes, see MGM API Errors BackupFailedToStart Backup 7 ALERT
Node node_id: Backup backup_id started from mgm_node_id has been aborted. Error: error_code The backup was terminated after starting, possibly due to user intervention BackupAborted Backup 7 ALERT

21.5.7.2 NDB Cluster Log Startup Messages

Possible startup messages with descriptions are provided in the following list:

  • Initial start, waiting for %s to connect, nodes [ all: %s connected: %s no-wait: %s ]

  • Waiting until nodes: %s connects, nodes [ all: %s connected: %s no-wait: %s ]

  • Waiting %u sec for nodes %s to connect, nodes [ all: %s connected: %s no-wait: %s ]

  • Waiting for non partitioned start, nodes [ all: %s connected: %s missing: %s no-wait: %s ]

  • Waiting %u sec for non partitioned start, nodes [ all: %s connected: %s missing: %s no-wait: %s ]

  • Initial start with nodes %s [ missing: %s no-wait: %s ]

  • Start with all nodes %s

  • Start with nodes %s [ missing: %s no-wait: %s ]

  • Start potentially partitioned with nodes %s [ missing: %s no-wait: %s ]

  • Unknown startreport: 0x%x [ %s %s %s %s ]

21.5.7.3 Event Buffer Reporting in the Cluster Log

NDB uses one or more memory buffers for events received from the data nodes. There is one such buffer for each Ndb object subscribing to table events, which means that there are usually two buffers for each mysqld performing binary logging (one buffer for schema events, and one for data events). Each buffer contains epochs made up of events. These events consist of operation types (insert, update, delete) and row data (before and after images plus metadata).

NDB generates messages in the cluster log to describe the state of these buffers. Although these reports appear in the cluster log, they refer to buffers on API nodes (unlike most other cluster log messages, which are generated by data nodes). These messages and the data structures underlying them were changed significantly in NDB 7.5.1, with the addition of the NDB_LE_EventBufferStatus2 event type and the ndb_logevent_EventBufferStatus2 data structure (see The Ndb_logevent_type Type). The remainder of this discussion focuses on the implementation based on NDB_LE_EventBufferStatus2.

Event buffer logging reports in the cluster log use the format shown here:

Node node_id: Event buffer status (object_id):
used=bytes_used (percent_used% of alloc)
alloc=bytes_allocated (percent_alloc% of max) max=bytes_available
latest_consumed_epoch=latest_consumed_epoch
latest_buffered_epoch=latest_buffered_epoch
report_reason=report_reason

The fields making up this report are listed here, with descriptions:

  • node_id: ID of the node where the report originated.

  • object_id: ID of the Ndb object where the report originated.

  • bytes_used: Number of bytes used by the buffer.

  • percent_used: Percentage of allocated bytes used.

  • bytes_allocated: Number of bytes allocated to this buffer.

  • percent_alloc: Percentage of available bytes used; not printed if ndb_eventbuffer_max_alloc is equal to 0 (unlimited).

  • bytes_available: Number of bytes available; this is 0 if ndb_eventbuffer_max_alloc is 0 (unlimited).

  • latest_consumed_epoch: The epoch most recently consumed to completion. (In NDB API applications, this is done by calling nextEvent().)

  • latest_buffered_epoch: The epoch most recently buffered (completely) in the event buffer.

  • report_reason: The reason for making the report. Possible reasons are shown later in this section.

The latest_consumed_epoch and latest_buffered_epoch fields correspond, respectively, to the apply_gci and latest_gci fields of the old-style event buffer logging messages used prior to NDB 7.5.1.
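For scripts that watch the cluster log, a report in this format can be split into its fields with a short parser. The following Python sketch is illustrative only: the helper name and the sample line are invented, and it assumes the NDB 7.5.1+ layout shown above, in which the percent_alloc clause is omitted when ndb_eventbuffer_max_alloc is 0 (unlimited).

```python
import re

# Matches the NDB 7.5.1+ event buffer status line; the "(N% of max)" clause
# after alloc= is optional, since it is not printed when
# ndb_eventbuffer_max_alloc is 0 (unlimited).
EVENT_BUFFER_RE = re.compile(
    r"Node (?P<node_id>\d+): Event buffer status \((?P<object_id>[^)]+)\): "
    r"used=(?P<bytes_used>\d+) \((?P<percent_used>\d+)% of alloc\) "
    r"alloc=(?P<bytes_allocated>\d+)(?: \((?P<percent_alloc>\d+)% of max\))? "
    r"max=(?P<bytes_available>\d+) "
    r"latest_consumed_epoch=(?P<latest_consumed_epoch>\d+) "
    r"latest_buffered_epoch=(?P<latest_buffered_epoch>\d+) "
    r"report_reason=(?P<report_reason>\w+)"
)

def parse_event_buffer_report(line):
    """Return the report fields as a dict, or None if the line does not match."""
    m = EVENT_BUFFER_RE.match(line)
    return m.groupdict() if m else None

# Hypothetical sample line in the documented format (values invented).
sample = ("Node 11: Event buffer status (0x9ac1f10): used=2048 (20% of alloc) "
          "alloc=10240 max=0 latest_consumed_epoch=341 latest_buffered_epoch=345 "
          "report_reason=ENOUGH_FREE_EVENTBUFFER")
report = parse_event_buffer_report(sample)
```

Note that percent_alloc comes back as None for the sample line, matching the documented behavior when ndb_eventbuffer_max_alloc is 0.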

Possible reasons for reporting are described in the following list:

  • ENOUGH_FREE_EVENTBUFFER: The event buffer has sufficient space.

  • LOW_FREE_EVENTBUFFER: The event buffer is running low on free space.

    The free-space percentage threshold that triggers these reports can be adjusted by setting the ndb_report_thresh_binlog_mem_usage server variable.

  • BUFFERED_EPOCHS_OVER_THRESHOLD: The number of buffered epochs has exceeded the configured threshold. This number is the difference between the latest epoch that has been received in its entirety and the epoch that has most recently been consumed (in NDB API applications, this is done by calling nextEvent() or nextEvent2()). The report is generated once per second until the number of buffered epochs falls below the threshold, which can be adjusted by setting the ndb_report_thresh_binlog_epoch_slip server variable. You can also adjust the threshold in NDB API applications by calling setEventBufferQueueEmptyEpoch().

  • PARTIALLY_DISCARDING: Event buffer memory is exhausted; that is, 100% of ndb_eventbuffer_max_alloc has been used. Any partially buffered epoch is buffered to completion even if usage exceeds 100%, but any new epochs received are discarded. This means that a gap has occurred in the event stream.

  • COMPLETELY_DISCARDING: No epochs are buffered.

  • PARTIALLY_BUFFERING: The buffer free percentage following the gap has risen to the threshold, which can be set in the mysql client using the ndb_eventbuffer_free_percent server system variable or in NDB API applications by calling set_eventbuffer_free_percent(). New epochs are buffered. Epochs that could not be completed due to the gap are discarded.

  • COMPLETELY_BUFFERING: All epochs received are being buffered, which means that there is sufficient event buffer memory. The gap in the event stream has been closed.

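To illustrate how the memory-related report reasons relate to buffer usage, the following Python sketch models a simplified choice of report reason. This is not NDB's actual logic (which also tracks epoch thresholds and other state); the free_pct_threshold parameter merely stands in for the thresholds controlled by ndb_report_thresh_binlog_mem_usage and ndb_eventbuffer_free_percent.

```python
def report_reason(used_pct, free_pct_threshold=20, gap_open=False):
    """Simplified illustration of report-reason selection.

    used_pct: event buffer usage as a percentage of the allocation limit.
    gap_open: True if epochs have already been discarded (a gap exists
    in the event stream), as in the *_DISCARDING / *_BUFFERING states.
    """
    free_pct = 100 - used_pct
    if gap_open:
        # After a gap, buffering of new epochs resumes only once enough
        # free space is available again.
        return ("PARTIALLY_BUFFERING" if free_pct >= free_pct_threshold
                else "COMPLETELY_DISCARDING")
    if used_pct >= 100:
        return "PARTIALLY_DISCARDING"   # limit reached; new epochs dropped
    if free_pct <= free_pct_threshold:
        return "LOW_FREE_EVENTBUFFER"
    return "ENOUGH_FREE_EVENTBUFFER"
```

For example, in this model usage of 95% with a 20% threshold yields LOW_FREE_EVENTBUFFER, while reaching 100% yields PARTIALLY_DISCARDING.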
21.5.7.4 NDB Cluster: NDB Transporter Errors

This section lists error codes, names, and messages that are written to the cluster log in the event of transporter errors.

Table 21.356 Error codes generated by transporter errors

Error Code Error Name Error Text
0x00 TE_NO_ERROR No error
0x01 TE_ERROR_CLOSING_SOCKET Error found during closing of socket
0x02 TE_ERROR_IN_SELECT_BEFORE_ACCEPT Error found before accept. The transporter will retry
0x03 TE_INVALID_MESSAGE_LENGTH Error found in message (invalid message length)
0x04 TE_INVALID_CHECKSUM Error found in message (checksum)
0x05 TE_COULD_NOT_CREATE_SOCKET Error found while creating socket(can't create socket)
0x06 TE_COULD_NOT_BIND_SOCKET Error found while binding server socket
0x07 TE_LISTEN_FAILED Error found while listening to server socket
0x08 TE_ACCEPT_RETURN_ERROR Error found during accept(accept return error)
0x0b TE_SHM_DISCONNECT The remote node has disconnected
0x0c TE_SHM_IPC_STAT Unable to check shm segment
0x0d TE_SHM_UNABLE_TO_CREATE_SEGMENT Unable to create shm segment
0x0e TE_SHM_UNABLE_TO_ATTACH_SEGMENT Unable to attach shm segment
0x0f TE_SHM_UNABLE_TO_REMOVE_SEGMENT Unable to remove shm segment
0x10 TE_TOO_SMALL_SIGID Sig ID too small
0x11 TE_TOO_LARGE_SIGID Sig ID too large
0x12 TE_WAIT_STACK_FULL Wait stack was full
0x13 TE_RECEIVE_BUFFER_FULL Receive buffer was full
0x14 TE_SIGNAL_LOST_SEND_BUFFER_FULL Send buffer was full,and trying to force send fails
0x15 TE_SIGNAL_LOST Send failed for unknown reason(signal lost)
0x16 TE_SEND_BUFFER_FULL The send buffer was full, but sleeping for a while solved
0x21 TE_SHM_IPC_PERMANENT Shm ipc Permanent error

Note

Transporter error codes 0x17 through 0x20 and 0x22 are reserved for SCI connections, which are not supported in this version of NDB Cluster, and so are not included here.

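When post-processing cluster logs, Table 21.356 can be turned into a simple lookup. The sketch below hardcodes the table as given; the helper name is our own, and unknown codes (including the reserved SCI range 0x17-0x20 and 0x22) fall back to a placeholder.

```python
# Transporter error codes from Table 21.356; codes 0x17-0x20 and 0x22
# (reserved for unsupported SCI connections) are intentionally absent.
TRANSPORTER_ERRORS = {
    0x00: "TE_NO_ERROR",
    0x01: "TE_ERROR_CLOSING_SOCKET",
    0x02: "TE_ERROR_IN_SELECT_BEFORE_ACCEPT",
    0x03: "TE_INVALID_MESSAGE_LENGTH",
    0x04: "TE_INVALID_CHECKSUM",
    0x05: "TE_COULD_NOT_CREATE_SOCKET",
    0x06: "TE_COULD_NOT_BIND_SOCKET",
    0x07: "TE_LISTEN_FAILED",
    0x08: "TE_ACCEPT_RETURN_ERROR",
    0x0b: "TE_SHM_DISCONNECT",
    0x0c: "TE_SHM_IPC_STAT",
    0x0d: "TE_SHM_UNABLE_TO_CREATE_SEGMENT",
    0x0e: "TE_SHM_UNABLE_TO_ATTACH_SEGMENT",
    0x0f: "TE_SHM_UNABLE_TO_REMOVE_SEGMENT",
    0x10: "TE_TOO_SMALL_SIGID",
    0x11: "TE_TOO_LARGE_SIGID",
    0x12: "TE_WAIT_STACK_FULL",
    0x13: "TE_RECEIVE_BUFFER_FULL",
    0x14: "TE_SIGNAL_LOST_SEND_BUFFER_FULL",
    0x15: "TE_SIGNAL_LOST",
    0x16: "TE_SEND_BUFFER_FULL",
    0x21: "TE_SHM_IPC_PERMANENT",
}

def transporter_error_name(code):
    """Map a transporter error code to its symbolic name."""
    return TRANSPORTER_ERRORS.get(code, "UNKNOWN_TRANSPORTER_ERROR")
```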
21.5.8 NDB Cluster Single User Mode

Single user mode enables the database administrator to restrict access to the database system to a single API node, such as a MySQL server (SQL node) or an instance of ndb_restore. When entering single user mode, connections to all other API nodes are closed gracefully and all running transactions are aborted. No new transactions are permitted to start.

Once the cluster has entered single user mode, only the designated API node is granted access to the database.

You can use the ALL STATUS command in the ndb_mgm client to see when the cluster has entered single user mode. You can also check the status column of the ndbinfo.nodes table (see Section 21.5.10.28, “The ndbinfo nodes Table”, for more information).

Example:

ndb_mgm> ENTER SINGLE USER MODE 5

After this command has executed and the cluster has entered single user mode, the API node whose node ID is 5 becomes the cluster's only permitted user.

The node specified in the preceding command must be an API node; attempting to specify any other type of node will be rejected.

Note

When the preceding command is invoked, all transactions running on the designated node are aborted, the connection is closed, and the server must be restarted.

The command EXIT SINGLE USER MODE changes the state of the cluster's data nodes from single user mode to normal mode. API nodes—such as MySQL Servers—waiting for a connection (that is, waiting for the cluster to become ready and available), are again permitted to connect. The API node denoted as the single-user node continues to run (if still connected) during and after the state change.

Example:

ndb_mgm> EXIT SINGLE USER MODE

There are two recommended ways to handle a node failure when running in single user mode:

  • Method 1:

    1. Finish all single user mode transactions

    2. Issue the EXIT SINGLE USER MODE command

    3. Restart the cluster's data nodes

  • Method 2:

    Restart storage nodes prior to entering single user mode.

21.5.9 Quick Reference: NDB Cluster SQL Statements

This section discusses several SQL statements that can prove useful in managing and monitoring a MySQL server that is connected to an NDB Cluster, and in some cases provide information about the cluster itself.

  • SHOW ENGINE NDB STATUS, SHOW ENGINE NDBCLUSTER STATUS

    The output of this statement contains information about the server's connection to the cluster, creation and usage of NDB Cluster objects, and binary logging for NDB Cluster replication.

    See Section 13.7.5.15, “SHOW ENGINE Syntax”, for a usage example and more detailed information.

  • SHOW ENGINES

    This statement can be used to determine whether or not clustering support is enabled in the MySQL server, and if so, whether it is active.

    See Section 13.7.5.16, “SHOW ENGINES Syntax”, for more detailed information.

    Note

    This statement does not support a LIKE clause. However, you can use LIKE to filter queries against the INFORMATION_SCHEMA.ENGINES table, as discussed in the next item.

  • SELECT * FROM INFORMATION_SCHEMA.ENGINES [WHERE ENGINE LIKE 'NDB%']

    This is the equivalent of SHOW ENGINES, but uses the ENGINES table of the INFORMATION_SCHEMA database. Unlike the case with the SHOW ENGINES statement, it is possible to filter the results using a LIKE clause, and to select specific columns to obtain information that may be of use in scripts. For example, the following query shows whether the server was built with NDB support and, if so, whether it is enabled:

    mysql> SELECT SUPPORT FROM INFORMATION_SCHEMA.ENGINES
        ->   WHERE ENGINE LIKE 'NDB%';
    +---------+
    | support |
    +---------+
    | ENABLED |
    +---------+
    

    See Section 24.7, “The INFORMATION_SCHEMA ENGINES Table”, for more information.

  • SHOW VARIABLES LIKE 'NDB%'

    This statement provides a list of most server system variables relating to the NDB storage engine, and their values, as shown here:

    mysql> SHOW VARIABLES LIKE 'NDB%';
    +-------------------------------------+-------+
    | Variable_name                       | Value |
    +-------------------------------------+-------+
    | ndb_autoincrement_prefetch_sz       | 32    |
    | ndb_cache_check_time                | 0     |
    | ndb_extra_logging                   | 0     |
    | ndb_force_send                      | ON    |
    | ndb_index_stat_cache_entries        | 32    |
    | ndb_index_stat_enable               | OFF   |
    | ndb_index_stat_update_freq          | 20    |
    | ndb_report_thresh_binlog_epoch_slip | 3     |
    | ndb_report_thresh_binlog_mem_usage  | 10    |
    | ndb_use_copying_alter_table         | OFF   |
    | ndb_use_exact_count                 | ON    |
    | ndb_use_transactions                | ON    |
    +-------------------------------------+-------+
    

    See Section 5.1.7, “Server System Variables”, for more information.

  • SELECT * FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES WHERE VARIABLE_NAME LIKE 'NDB%';

    This statement is the equivalent of the SHOW command described in the previous item, and provides almost identical output, as shown here:

    mysql> SELECT * FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES
        ->   WHERE VARIABLE_NAME LIKE 'NDB%';
    +-------------------------------------+----------------+
    | VARIABLE_NAME                       | VARIABLE_VALUE |
    +-------------------------------------+----------------+
    | NDB_AUTOINCREMENT_PREFETCH_SZ       | 32             |
    | NDB_CACHE_CHECK_TIME                | 0              |
    | NDB_EXTRA_LOGGING                   | 0              |
    | NDB_FORCE_SEND                      | ON             |
    | NDB_INDEX_STAT_CACHE_ENTRIES        | 32             |
    | NDB_INDEX_STAT_ENABLE               | OFF            |
    | NDB_INDEX_STAT_UPDATE_FREQ          | 20             |
    | NDB_REPORT_THRESH_BINLOG_EPOCH_SLIP | 3              |
    | NDB_REPORT_THRESH_BINLOG_MEM_USAGE  | 10             |
    | NDB_USE_COPYING_ALTER_TABLE         | OFF            |
    | NDB_USE_EXACT_COUNT                 | ON             |
    | NDB_USE_TRANSACTIONS                | ON             |
    +-------------------------------------+----------------+
    

    Unlike the case with the SHOW command, it is possible to select individual columns. For example:

    mysql> SELECT VARIABLE_VALUE 
        ->   FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES
        ->   WHERE VARIABLE_NAME = 'ndb_force_send';
    +----------------+
    | VARIABLE_VALUE |
    +----------------+
    | ON             |
    +----------------+
    

    See Section 24.11, “The INFORMATION_SCHEMA GLOBAL_VARIABLES and SESSION_VARIABLES Tables”, and Section 5.1.7, “Server System Variables”, for more information.

  • SHOW STATUS LIKE 'NDB%'

    This statement shows at a glance whether or not the MySQL server is acting as a cluster SQL node, and if so, it provides the MySQL server's cluster node ID, the host name and port for the cluster management server to which it is connected, and the number of data nodes in the cluster, as shown here:

    mysql> SHOW STATUS LIKE 'NDB%';
    +--------------------------+----------------+
    | Variable_name            | Value          |
    +--------------------------+----------------+
    | Ndb_cluster_node_id      | 10             |
    | Ndb_config_from_host     | 198.51.100.103 |
    | Ndb_config_from_port     | 1186           |
    | Ndb_number_of_data_nodes | 4              |
    +--------------------------+----------------+
    

    If the MySQL server was built with clustering support, but it is not connected to a cluster, all rows in the output of this statement contain a zero or an empty string:

    mysql> SHOW STATUS LIKE 'NDB%';
    +--------------------------+-------+
    | Variable_name            | Value |
    +--------------------------+-------+
    | Ndb_cluster_node_id      | 0     |
    | Ndb_config_from_host     |       |
    | Ndb_config_from_port     | 0     |
    | Ndb_number_of_data_nodes | 0     |
    +--------------------------+-------+
    

    See also Section 13.7.5.35, “SHOW STATUS Syntax”.

  • SELECT * FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME LIKE 'NDB%';

    This statement provides similar output to the SHOW command discussed in the previous item. However, unlike the case with SHOW STATUS, it is possible to use SELECT to extract values in SQL for use in scripts for monitoring and automation purposes.

    See Section 24.10, “The INFORMATION_SCHEMA GLOBAL_STATUS and SESSION_STATUS Tables”, for more information.
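    For example, a monitoring script might reduce the rows returned by that query to a connectivity summary. This Python sketch is hypothetical (the helper name and sample rows are ours); it assumes the rows arrive as (VARIABLE_NAME, VARIABLE_VALUE) string pairs, mirroring the SHOW STATUS output shown earlier.

```python
def ndb_status_summary(rows):
    """Summarize NDB-related rows fetched from
    INFORMATION_SCHEMA.GLOBAL_STATUS as (name, value) pairs."""
    status = {name.upper(): value for name, value in rows}
    node_id = int(status.get("NDB_CLUSTER_NODE_ID", "0"))
    return {
        "connected": node_id != 0,  # 0 means not connected to a cluster
        "node_id": node_id,
        "mgm_host": status.get("NDB_CONFIG_FROM_HOST", ""),
        "data_nodes": int(status.get("NDB_NUMBER_OF_DATA_NODES", "0")),
    }

# Sample rows mirroring the connected-server SHOW STATUS output above.
sample = [("NDB_CLUSTER_NODE_ID", "10"),
          ("NDB_CONFIG_FROM_HOST", "198.51.100.103"),
          ("NDB_CONFIG_FROM_PORT", "1186"),
          ("NDB_NUMBER_OF_DATA_NODES", "4")]
summary = ndb_status_summary(sample)
```

    A disconnected server would instead yield a node_id of 0 and connected=False, matching the zero/empty output shown for SHOW STATUS.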

You can also query the tables in the ndbinfo information database for real-time data about many NDB Cluster operations. See Section 21.5.10, “ndbinfo: The NDB Cluster Information Database”.

21.5.10 ndbinfo: The NDB Cluster Information Database

21.5.10.1 The ndbinfo arbitrator_validity_detail Table
21.5.10.2 The ndbinfo arbitrator_validity_summary Table
21.5.10.3 The ndbinfo blocks Table
21.5.10.4 The ndbinfo cluster_locks Table
21.5.10.5 The ndbinfo cluster_operations Table
21.5.10.6 The ndbinfo cluster_transactions Table
21.5.10.7 The ndbinfo config_nodes Table
21.5.10.8 The ndbinfo config_params Table
21.5.10.9 The ndbinfo config_values Table
21.5.10.10 The ndbinfo counters Table
21.5.10.11 The ndbinfo cpustat Table
21.5.10.12 The ndbinfo cpustat_50ms Table
21.5.10.13 The ndbinfo cpustat_1sec Table
21.5.10.14 The ndbinfo cpustat_20sec Table
21.5.10.15 The ndbinfo dict_obj_info Table
21.5.10.16 The ndbinfo dict_obj_types Table
21.5.10.17 The ndbinfo disk_write_speed_base Table
21.5.10.18 The ndbinfo disk_write_speed_aggregate Table
21.5.10.19 The ndbinfo disk_write_speed_aggregate_node Table
21.5.10.20 The ndbinfo diskpagebuffer Table
21.5.10.21 The ndbinfo error_messages Table
21.5.10.22 The ndbinfo locks_per_fragment Table
21.5.10.23 The ndbinfo logbuffers Table
21.5.10.24 The ndbinfo logspaces Table
21.5.10.25 The ndbinfo membership Table
21.5.10.26 The ndbinfo memoryusage Table
21.5.10.27 The ndbinfo memory_per_fragment Table
21.5.10.28 The ndbinfo nodes Table
21.5.10.29 The ndbinfo operations_per_fragment Table
21.5.10.30 The ndbinfo processes Table
21.5.10.31 The ndbinfo resources Table
21.5.10.32 The ndbinfo restart_info Table
21.5.10.33 The ndbinfo server_locks Table
21.5.10.34 The ndbinfo server_operations Table
21.5.10.35 The ndbinfo server_transactions Table
21.5.10.36 The ndbinfo table_distribution_status Table
21.5.10.37 The ndbinfo table_fragments Table
21.5.10.38 The ndbinfo table_info Table
21.5.10.39 The ndbinfo table_replicas Table
21.5.10.40 The ndbinfo tc_time_track_stats Table
21.5.10.41 The ndbinfo threadblocks Table
21.5.10.42 The ndbinfo threads Table
21.5.10.43 The ndbinfo threadstat Table
21.5.10.44 The ndbinfo transporters Table

ndbinfo is a database containing information specific to NDB Cluster.

This database contains a number of tables, each providing a different sort of data about NDB Cluster node status, resource usage, and operations. You can find more detailed information about each of these tables in the next several sections.

ndbinfo is included with NDB Cluster support in the MySQL Server; no special compilation or configuration steps are required; the tables are created by the MySQL Server when it connects to the cluster. You can verify that ndbinfo support is active in a given MySQL Server instance using SHOW PLUGINS; if ndbinfo support is enabled, you should see a row containing ndbinfo in the Name column and ACTIVE in the Status column, as shown here (emphasized text):

mysql> SHOW PLUGINS;
+----------------------------------+--------+--------------------+---------+---------+
| Name                             | Status | Type               | Library | License |
+----------------------------------+--------+--------------------+---------+---------+
| binlog                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| mysql_native_password            | ACTIVE | AUTHENTICATION     | NULL    | GPL     |
| sha256_password                  | ACTIVE | AUTHENTICATION     | NULL    | GPL     |
| MRG_MYISAM                       | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| MEMORY                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| CSV                              | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| MyISAM                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| InnoDB                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| INNODB_TRX                       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_LOCKS                     | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_LOCK_WAITS                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMP                       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMP_RESET                 | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMPMEM                    | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMPMEM_RESET              | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMP_PER_INDEX             | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMP_PER_INDEX_RESET       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_BUFFER_PAGE               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_BUFFER_PAGE_LRU           | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_BUFFER_POOL_STATS         | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_TEMP_TABLE_INFO           | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_METRICS                   | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_FT_DEFAULT_STOPWORD       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_FT_DELETED                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_FT_BEING_DELETED          | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_FT_CONFIG                 | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_FT_INDEX_CACHE            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_FT_INDEX_TABLE            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_TABLES                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_TABLESTATS            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_INDEXES               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_COLUMNS               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_FIELDS                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_FOREIGN               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_FOREIGN_COLS          | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_TABLESPACES           | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_DATAFILES             | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_SYS_VIRTUAL               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| PERFORMANCE_SCHEMA               | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| ndbcluster                       | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| ndbinfo                          | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| ndb_transid_mysql_connection_map | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| BLACKHOLE                        | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| ARCHIVE                          | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| partition                        | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| ngram                            | ACTIVE | FTPARSER           | NULL    | GPL     |
+----------------------------------+--------+--------------------+---------+---------+
46 rows in set (0.00 sec)

You can also do this by checking the output of SHOW ENGINES for a line including ndbinfo in the Engine column and YES in the Support column, as shown here (emphasized text):

mysql> SHOW ENGINES\G
*************************** 1. row ***************************
      Engine: ndbcluster
     Support: YES
     Comment: Clustered, fault-tolerant tables
Transactions: YES
          XA: NO
  Savepoints: NO
*************************** 2. row ***************************
      Engine: CSV
     Support: YES
     Comment: CSV storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 3. row ***************************
      Engine: InnoDB
     Support: DEFAULT
     Comment: Supports transactions, row-level locking, and foreign keys
Transactions: YES
          XA: YES
  Savepoints: YES
*************************** 4. row ***************************
      Engine: BLACKHOLE
     Support: YES
     Comment: /dev/null storage engine (anything you write to it disappears)
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 5. row ***************************
      Engine: MyISAM
     Support: YES
     Comment: MyISAM storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 6. row ***************************
      Engine: MRG_MYISAM
     Support: YES
     Comment: Collection of identical MyISAM tables
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 7. row ***************************
      Engine: ARCHIVE
     Support: YES
     Comment: Archive storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 8. row ***************************
      Engine: ndbinfo
     Support: YES
     Comment: NDB Cluster system information storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 9. row ***************************
      Engine: PERFORMANCE_SCHEMA
     Support: YES
     Comment: Performance Schema
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 10. row ***************************
      Engine: MEMORY
     Support: YES
     Comment: Hash based, stored in memory, useful for temporary tables
Transactions: NO
          XA: NO
  Savepoints: NO
10 rows in set (0.00 sec)

If ndbinfo support is enabled, then you can access ndbinfo using SQL statements in mysql or another MySQL client. For example, you can see ndbinfo listed in the output of SHOW DATABASES, as shown here (emphasized text):

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| ndbinfo            |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.04 sec)

If the mysqld process was not started with the --ndbcluster option, ndbinfo is not available and is not displayed by SHOW DATABASES. If mysqld was formerly connected to an NDB Cluster but the cluster becomes unavailable (due to events such as cluster shutdown, loss of network connectivity, and so forth), ndbinfo and its tables remain visible, but an attempt to access any tables (other than blocks or config_params) fails with Got error 157 'Connection to NDB failed' from NDBINFO.
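
In that situation, queries behave roughly as follows (a sketch of the behavior described above; the exact client error code accompanying the message may vary by version):

```sql
-- With the cluster unavailable, most ndbinfo tables cannot be read;
-- the query fails with: Got error 157 'Connection to NDB failed' from NDBINFO
SELECT * FROM ndbinfo.counters;

-- blocks and config_params do not require a live cluster connection
-- and remain readable
SELECT * FROM ndbinfo.blocks;
```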

With the exception of the blocks and config_params tables, what we refer to as ndbinfo tables are actually views generated from internal NDB tables not normally visible to the MySQL Server.
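
The underlying base tables can be inspected by enabling the ndbinfo_show_hidden server system variable, which exposes the normally hidden ndb$ tables. A minimal sketch:

```sql
-- Expose the hidden ndb$ base tables underlying the ndbinfo views
SET GLOBAL ndbinfo_show_hidden = ON;

-- The listing now also includes names such as ndb$blocks
SHOW TABLES IN ndbinfo;

SET GLOBAL ndbinfo_show_hidden = OFF;
```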

All ndbinfo tables are read-only, and are generated on demand when queried. Because many of them are generated in parallel by the data nodes while others are specific to a given SQL node, they are not guaranteed to provide a consistent snapshot.

In addition, pushing down of joins is not supported on ndbinfo tables; so joining large ndbinfo tables can require transfer of a large amount of data to the requesting API node, even when the query makes use of a WHERE clause.
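
The cost implication can be seen in a query such as the following sketch: because neither side of the join is pushed down, both ndbinfo tables are materialized on the requesting SQL node before the join and WHERE clause are evaluated (the table ID used here is illustrative):

```sql
-- Both ndbinfo tables are transferred in full to this SQL node;
-- the WHERE clause does not reduce the data sent by the data nodes
SELECT o.node_id, o.operation_type, t.state
    FROM ndbinfo.cluster_operations o
    JOIN ndbinfo.cluster_transactions t USING (transid)
    WHERE o.tableid = 17;
```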

ndbinfo tables are not included in the query cache. (Bug #59831)

You can select the ndbinfo database with a USE statement, and then issue a SHOW TABLES statement to obtain a list of tables, just as for any other database, like this:

mysql> USE ndbinfo;
Database changed

mysql> SHOW TABLES;
+---------------------------------+
| Tables_in_ndbinfo               |
+---------------------------------+
| arbitrator_validity_detail      |
| arbitrator_validity_summary     |
| blocks                          |
| cluster_locks                   |
| cluster_operations              |
| cluster_transactions            |
| config_nodes                    |
| config_params                   |
| config_values                   |
| counters                        |
| cpustat                         |
| cpustat_1sec                    |
| cpustat_20sec                   |
| cpustat_50ms                    |
| dict_obj_info                   |
| dict_obj_types                  |
| disk_write_speed_aggregate      |
| disk_write_speed_aggregate_node |
| disk_write_speed_base           |
| diskpagebuffer                  |
| error_messages                  |
| locks_per_fragment              |
| logbuffers                      |
| logspaces                       |
| membership                      |
| memory_per_fragment             |
| memoryusage                     |
| nodes                           |
| operations_per_fragment         |
| processes                       |
| resources                       |
| restart_info                    |
| server_locks                    |
| server_operations               |
| server_transactions             |
| table_distribution_status       |
| table_fragments                 |
| table_info                      |
| table_replicas                  |
| tc_time_track_stats             |
| threadblocks                    |
| threads                         |
| threadstat                      |
| transporters                    |
+---------------------------------+
44 rows in set (0.00 sec)

In NDB 7.5.0 (and later), all ndbinfo tables use the NDB storage engine; however, an ndbinfo entry still appears in the output of SHOW ENGINES and SHOW PLUGINS as described previously.

The config_values table was added in NDB 7.5.0.

The cpustat, cpustat_50ms, cpustat_1sec, cpustat_20sec, and threads tables were added in NDB 7.5.2.

The cluster_locks, locks_per_fragment, and server_locks tables were added in NDB 7.5.3.

The dict_obj_info, table_distribution_status, table_fragments, table_info, and table_replicas tables were added in NDB 7.5.4.

The config_nodes and processes tables were added in NDB 7.5.7 and NDB 7.6.2.

The error_messages table was added in NDB 7.6.4.

You can execute SELECT statements against these tables, just as you would normally expect:

mysql> SELECT * FROM memoryusage;
+---------+---------------------+--------+------------+------------+-------------+
| node_id | memory_type         | used   | used_pages | total      | total_pages |
+---------+---------------------+--------+------------+------------+-------------+
|       5 | Data memory         | 753664 |         23 | 1073741824 |       32768 |
|       5 | Index memory        | 163840 |         20 | 1074003968 |      131104 |
|       5 | Long message buffer |   2304 |          9 |   67108864 |      262144 |
|       6 | Data memory         | 753664 |         23 | 1073741824 |       32768 |
|       6 | Index memory        | 163840 |         20 | 1074003968 |      131104 |
|       6 | Long message buffer |   2304 |          9 |   67108864 |      262144 |
+---------+---------------------+--------+------------+------------+-------------+
6 rows in set (0.02 sec)

More complex queries, such as the two following SELECT statements using the memoryusage table, are possible:

mysql> SELECT SUM(used) as 'Data Memory Used, All Nodes'
     >     FROM memoryusage
     >     WHERE memory_type = 'Data memory';
+-----------------------------+
| Data Memory Used, All Nodes |
+-----------------------------+
|                        6460 |
+-----------------------------+
1 row in set (0.37 sec)

mysql> SELECT SUM(total) as 'Total IndexMemory Available'
     >     FROM memoryusage
     >     WHERE memory_type = 'Index memory';
+-----------------------------+
| Total IndexMemory Available |
+-----------------------------+
|                       25664 |
+-----------------------------+
1 row in set (0.33 sec)

ndbinfo table and column names are case sensitive (as is the name of the ndbinfo database itself). These identifiers are in lowercase. Trying to use the wrong lettercase results in an error, as shown in this example:

mysql> SELECT * FROM nodes;
+---------+--------+---------+-------------+
| node_id | uptime | status  | start_phase |
+---------+--------+---------+-------------+
|       1 |  13602 | STARTED |           0 |
|       2 |     16 | STARTED |           0 |
+---------+--------+---------+-------------+
2 rows in set (0.04 sec)

mysql> SELECT * FROM Nodes;
ERROR 1146 (42S02): Table 'ndbinfo.Nodes' doesn't exist

mysqldump ignores the ndbinfo database entirely, and excludes it from any output. This is true even when using the --databases or --all-databases option.

NDB Cluster also maintains tables in the INFORMATION_SCHEMA information database, including the FILES table which contains information about files used for NDB Cluster Disk Data storage, and the ndb_transid_mysql_connection_map table, which shows the relationships between transactions, transaction coordinators, and NDB Cluster API nodes. For more information, see the descriptions of the tables or Section 21.5.11, “INFORMATION_SCHEMA Tables for NDB Cluster”.

21.5.10.1 The ndbinfo arbitrator_validity_detail Table

The arbitrator_validity_detail table shows the view that each data node in the cluster has of the arbitrator. It is a subset of the membership table.

The following table provides information about the columns in the arbitrator_validity_detail table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.357 Columns of the arbitrator_validity_detail table

Column Name Type Description
node_id integer This node's node ID
arbitrator integer Node ID of arbitrator
arb_ticket string Internal identifier used to track arbitration
arb_connected Yes or No Whether this node is connected to the arbitrator
arb_state Enumeration (see text) Arbitration state

The node ID is the same as that reported by ndb_mgm -e "SHOW".

All nodes should show the same arbitrator and arb_ticket values as well as the same arb_state value. Possible arb_state values are ARBIT_NULL, ARBIT_INIT, ARBIT_FIND, ARBIT_PREP1, ARBIT_PREP2, ARBIT_START, ARBIT_RUN, ARBIT_CHOOSE, ARBIT_CRASH, and UNKNOWN.

arb_connected shows whether the current node is connected to the arbitrator.
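
A quick consistency check over all data nodes might look like the following sketch; in a healthy cluster every row is expected to agree on the arbitrator, arb_ticket, and arb_state values:

```sql
-- Each data node's view of the arbitrator; all rows should agree
SELECT node_id, arbitrator, arb_ticket, arb_connected, arb_state
    FROM ndbinfo.arbitrator_validity_detail
    ORDER BY node_id;
```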

21.5.10.2 The ndbinfo arbitrator_validity_summary Table

The arbitrator_validity_summary table provides a composite view of the arbitrator with regard to the cluster's data nodes.

The following table provides information about the columns in the arbitrator_validity_summary table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.358 Columns of the arbitrator_validity_summary table

Column Name Type Description
arbitrator integer Node ID of arbitrator
arb_ticket string Internal identifier used to track arbitration
arb_connected Yes or No Whether this arbitrator is connected to the cluster
consensus_count integer Number of data nodes that see this node as arbitrator

In normal operations, this table should have only 1 row for any appreciable length of time. If it has more than 1 row for longer than a few moments, then either not all nodes are connected to the arbitrator, or all nodes are connected, but do not agree on the same arbitrator.

The arbitrator column shows the arbitrator's node ID.

arb_ticket is the internal identifier used by this arbitrator.

arb_connected shows whether this node is connected to the cluster as an arbitrator.
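
The health condition described above can be checked with a query such as this sketch; a single row, with consensus_count equal to the number of data nodes, indicates normal operation:

```sql
-- In a healthy cluster this returns exactly one row, with
-- consensus_count equal to the number of data nodes
SELECT arbitrator, arb_ticket, arb_connected, consensus_count
    FROM ndbinfo.arbitrator_validity_summary;
```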

21.5.10.3 The ndbinfo blocks Table

The blocks table is a static table which simply contains the names and internal IDs of all NDB kernel blocks (see NDB Kernel Blocks). It is for use by the other ndbinfo tables (most of which are actually views) in mapping block numbers to block names for producing human-readable output.

The following table provides information about the columns in the blocks table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.359 Columns of the blocks table

Column Name Type Description
block_number integer Block number
block_name string Block name

To obtain a list of all block names, simply execute SELECT block_name FROM ndbinfo.blocks. Although this is a static table, its content can vary between different NDB Cluster releases.

21.5.10.4 The ndbinfo cluster_locks Table

The cluster_locks table provides information about current lock requests holding and waiting for locks on NDB tables in an NDB Cluster, and is intended as a companion table to cluster_operations. Information obtained from the cluster_locks table may be useful in investigating stalls and deadlocks.

The following table provides information about the columns in the cluster_locks table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.360 Columns of the cluster_locks table

Column Name Type Description
node_id integer ID of reporting node
block_instance integer ID of reporting LDM instance
tableid integer ID of table containing this row
fragmentid integer ID of fragment containing locked row
rowid integer ID of locked row
transid integer Transaction ID
mode string Lock request mode
state string Lock state
detail string Whether this is first holding lock in row lock queue
op string Operation type
duration_millis integer Milliseconds spent waiting or holding lock
lock_num integer ID of lock object
waiting_for integer Waiting for lock with this ID

The table ID (tableid column) is assigned internally, and is the same as that used in other ndbinfo tables. It is also shown in the output of ndb_show_tables.

The transaction ID (transid column) is the identifier generated by the NDB API for the transaction requesting or holding the current lock.

The mode column shows the lock mode; this is always one of S (indicating a shared lock) or X (an exclusive lock). If a transaction holds an exclusive lock on a given row, all other locks on that row have the same transaction ID.

The state column shows the lock state. Its value is always one of H (holding) or W (waiting). A waiting lock request waits for a lock held by a different transaction.

When the detail column contains a * (asterisk character), this means that this lock is the first holding lock in the affected row's lock queue; otherwise, this column is empty. This information can be used to help identify the unique entries in a list of lock requests.

The op column shows the type of operation requesting the lock. This is always one of the values READ, INSERT, UPDATE, DELETE, SCAN, or REFRESH.

The duration_millis column shows the number of milliseconds for which this lock request has been waiting or holding the lock. This is reset to 0 when a lock is granted for a waiting request.

The lock ID (lock_num column) is unique to this node and block instance.

The lock state is shown in the state column; if this is W, the lock is waiting to be granted, and the waiting_for column shows the lock ID of the lock object this request is waiting for. Otherwise, the waiting_for column is empty. waiting_for can refer only to locks on the same row, as identified by node_id, block_instance, tableid, fragmentid, and rowid.
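
When investigating a stall, a sketch such as the following can surface the longest-waiting lock requests together with the locks blocking them:

```sql
-- Lock requests currently waiting, longest waits first; waiting_for
-- identifies the lock (on the same row) that blocks each request
SELECT node_id, tableid, fragmentid, rowid,
       transid, mode, duration_millis, waiting_for
    FROM ndbinfo.cluster_locks
    WHERE state = 'W'
    ORDER BY duration_millis DESC;
```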

The cluster_locks table was added in NDB 7.5.3.

21.5.10.5 The ndbinfo cluster_operations Table

The cluster_operations table provides a per-operation (stateful primary key op) view of all activity in the NDB Cluster from the point of view of the local data management (LQH) blocks (see The DBLQH Block).

The following table provides information about the columns in the cluster_operations table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.361 Columns of the cluster_operations table

Column Name Type Description
node_id integer Node ID of reporting LQH block
block_instance integer LQH block instance
transid integer Transaction ID
operation_type string Operation type (see text for possible values)
state string Operation state (see text for possible values)
tableid integer Table ID
fragmentid integer Fragment ID
client_node_id integer Client node ID
client_block_ref integer Client block reference
tc_node_id integer Transaction coordinator node ID
tc_block_no integer Transaction coordinator block number
tc_block_instance integer Transaction coordinator block instance

The transaction ID is a unique 64-bit number which can be obtained using the NDB API's getTransactionId() method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

The operation_type column can take any one of the values READ, READ-SH, READ-EX, INSERT, UPDATE, DELETE, WRITE, UNLOCK, REFRESH, SCAN, SCAN-SH, SCAN-EX, or <unknown>.

The state column can have any one of the values ABORT_QUEUED, ABORT_STOPPED, COMMITTED, COMMIT_QUEUED, COMMIT_STOPPED, COPY_CLOSE_STOPPED, COPY_FIRST_STOPPED, COPY_STOPPED, COPY_TUPKEY, IDLE, LOG_ABORT_QUEUED, LOG_COMMIT_QUEUED, LOG_COMMIT_QUEUED_WAIT_SIGNAL, LOG_COMMIT_WRITTEN, LOG_COMMIT_WRITTEN_WAIT_SIGNAL, LOG_QUEUED, PREPARED, PREPARED_RECEIVED_COMMIT, SCAN_CHECK_STOPPED, SCAN_CLOSE_STOPPED, SCAN_FIRST_STOPPED, SCAN_RELEASE_STOPPED, SCAN_STATE_USED, SCAN_STOPPED, SCAN_TUPKEY, STOPPED, TC_NOT_CONNECTED, WAIT_ACC, WAIT_ACC_ABORT, WAIT_AI_AFTER_ABORT, WAIT_ATTR, WAIT_SCAN_AI, WAIT_TUP, WAIT_TUPKEYINFO, WAIT_TUP_COMMIT, or WAIT_TUP_TO_ABORT. (If the MySQL Server is running with ndbinfo_show_hidden enabled, you can view this list of states by selecting from the ndb$dblqh_tcconnect_state table, which is normally hidden.)

You can obtain the name of an NDB table from its table ID by checking the output of ndb_show_tables.

The fragmentid is the same as the partition number seen in the output of ndb_desc --extra-partition-info (short form -p).

In client_node_id and client_block_ref, client refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The block_instance and tc_block_instance column provide, respectively, the DBLQH and DBTC block instance numbers. You can use these along with the block names to obtain information about specific threads from the threadblocks table.
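
For example, the thread executing a given operation's DBLQH block instance can be looked up with a join such as this sketch (which assumes the threadblocks columns node_id, thr_no, block_name, and block_instance described later in this section):

```sql
-- Map each operation's reporting DBLQH instance to a thread number
SELECT o.node_id, o.transid, o.operation_type, t.thr_no
    FROM ndbinfo.cluster_operations o
    JOIN ndbinfo.threadblocks t
        ON  t.node_id        = o.node_id
        AND t.block_name     = 'DBLQH'
        AND t.block_instance = o.block_instance;
```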

21.5.10.6 The ndbinfo cluster_transactions Table

The cluster_transactions table shows information about all ongoing transactions in an NDB Cluster.

The following table provides information about the columns in the cluster_transactions table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.362 Columns of the cluster_transactions table

Column Name Type Description
node_id integer Node ID of transaction coordinator
block_instance integer TC block instance
transid integer Transaction ID
state string Operation state (see text for possible values)
count_operations integer Number of stateful primary key operations in transaction (includes reads with locks, as well as DML operations)
outstanding_operations integer Operations still being executed in local data management blocks
inactive_seconds integer Time spent waiting for API
client_node_id integer Client node ID
client_block_ref integer Client block reference

The transaction ID is a unique 64-bit number which can be obtained using the NDB API's getTransactionId() method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

block_instance refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the threadblocks table.

The state column can have any one of the values CS_ABORTING, CS_COMMITTING, CS_COMMIT_SENT, CS_COMPLETE_SENT, CS_COMPLETING, CS_CONNECTED, CS_DISCONNECTED, CS_FAIL_ABORTED, CS_FAIL_ABORTING, CS_FAIL_COMMITTED, CS_FAIL_COMMITTING, CS_FAIL_COMPLETED, CS_FAIL_PREPARED, CS_PREPARE_TO_COMMIT, CS_RECEIVING, CS_REC_COMMITTING, CS_RESTART, CS_SEND_FIRE_TRIG_REQ, CS_STARTED, CS_START_COMMITTING, CS_START_SCAN, CS_WAIT_ABORT_CONF, CS_WAIT_COMMIT_CONF, CS_WAIT_COMPLETE_CONF, CS_WAIT_FIRE_TRIG_REQ. (If the MySQL Server is running with ndbinfo_show_hidden enabled, you can view this list of states by selecting from the ndb$dbtc_apiconnect_state table, which is normally hidden.)

In client_node_id and client_block_ref, client refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The tc_block_instance column provides the DBTC block instance number. You can use this along with the block name to obtain information about specific threads from the threadblocks table.

21.5.10.7 The ndbinfo config_nodes Table

The config_nodes table shows nodes configured in an NDB Cluster config.ini file. For each node, the table displays a row containing the node ID, the type of node (management node, data node, or API node), and the name or IP address of the host on which the node is configured to run.

This table does not indicate whether a given node is actually running, or whether it is currently connected to the cluster. Information about nodes connected to an NDB Cluster can be obtained from the nodes and processes tables.

The following table provides information about the columns in the config_nodes table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.363 Columns of the config_nodes table

Column Name Type Description
node_id integer The node's ID
node_type string The type of node
node_hostname string The name or IP address of the host on which the node resides

The node_id column shows the node ID used in the config.ini file for this node; if none is specified, the node ID that would be assigned automatically to this node is displayed.

The node_type column displays one of the following three values:

  • MGM: Management node.

  • NDB: Data node.

  • API: API node; this includes SQL nodes.

The node_hostname column shows the node host as specified in the config.ini file. This can be empty for an API node, if HostName has not been set in the cluster configuration file. If HostName has not been set for a data node in the configuration file, localhost is used here. localhost is also used if HostName has not been specified for a management node.
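
For example, a query such as the following (illustrative, and assuming the ndbinfo database is accessible from this SQL node) lists all configured nodes grouped by type:

SELECT    node_id, node_type, node_hostname
FROM      ndbinfo.config_nodes
ORDER BY  node_type, node_id;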

The config_nodes table was added in NDB 7.5.7 and NDB 7.6.2.

21.5.10.8 The ndbinfo config_params Table

The config_params table is a static table which provides the names, internal ID numbers, and other information about NDB Cluster configuration parameters.

The following table provides information about the columns in the config_params table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table. This table can also be used in conjunction with the config_values table for obtaining realtime information about node configuration parameters.

Table 21.364 Columns of the config_params table

Column Name Type Description
param_number integer The parameter's internal ID number
param_name string The name of the parameter
param_description string A brief description of the parameter
param_type string The parameter's data type
param_default string The parameter's default value, if any
param_min string The parameter's minimum value, if any
param_max string The parameter's maximum value, if any
param_mandatory integer This is 1 if the parameter is required, otherwise 0
param_status string Currently unused

In NDB Cluster 7.5 (and later), this table is read-only. The param_description, param_type, param_default, param_min, param_max, param_mandatory, and param_status columns were all added in NDB 7.5.0.

Although this is a static table, its content can vary between NDB Cluster installations, since supported parameters can vary due to differences between software releases, cluster hardware configurations, and other factors.
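
For example, a query such as this one (shown as an illustration; the exact values returned depend on the NDB software version) retrieves the internal ID number, type, and default value for the DataMemory parameter:

SELECT  param_number, param_type, param_default
FROM    config_params
WHERE   param_name = 'DataMemory';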

21.5.10.9 The ndbinfo config_values Table

The config_values table, implemented in NDB 7.5.0, provides information about the current state of node configuration parameter values. Each row in the table corresponds to the current value of a parameter on a given node.

Table 21.365 Columns of the config_values table

Column Name Type Description
node_id integer ID of the node in the cluster
config_param integer The parameter's internal ID number
config_value string Current value of the parameter

This table's config_param column and the config_params table's param_number column use the same parameter identifiers. By joining the two tables on these columns, you can obtain detailed information about desired node configuration parameters. The query shown here provides the current values for all parameters on each data node in the cluster, ordered by node ID and parameter name:

SELECT    v.node_id AS 'Node Id',
          p.param_name AS 'Parameter',
          v.config_value AS 'Value'
FROM      config_values v
JOIN      config_params p
ON        v.config_param=p.param_number
WHERE     p.param_name NOT LIKE '\_\_%'
ORDER BY  v.node_id, p.param_name;

Partial output from the previous query when run on a small example cluster used for simple testing:

+---------+------------------------------------------+----------------+
| Node Id | Parameter                                | Value          |
+---------+------------------------------------------+----------------+
|       2 | Arbitration                              | 1              |
|       2 | ArbitrationTimeout                       | 7500           |
|       2 | BackupDataBufferSize                     | 16777216       |
|       2 | BackupDataDir                            | /home/jon/data |
|       2 | BackupDiskWriteSpeedPct                  | 50             |
|       2 | BackupLogBufferSize                      | 16777216       |

...

|       3 | TotalSendBufferMemory                    | 0              |
|       3 | TransactionBufferMemory                  | 1048576        |
|       3 | TransactionDeadlockDetectionTimeout      | 1200           |
|       3 | TransactionInactiveTimeout               | 4294967039     |
|       3 | TwoPassInitialNodeRestartCopy            | 0              |
|       3 | UndoDataBuffer                           | 16777216       |
|       3 | UndoIndexBuffer                          | 2097152        |
+---------+------------------------------------------+----------------+
248 rows in set (0.02 sec)

The WHERE clause filters out parameters whose names begin with a double underscore (__); these parameters are reserved for testing and other internal uses by the NDB developers, and are not intended for use in a production NDB Cluster.

You can obtain output that is more specific, more detailed, or both by issuing the proper queries. This example provides all types of available information about the NodeId, NoOfReplicas, HostName, DataMemory, IndexMemory, and TotalSendBufferMemory parameters as currently set for all data nodes in the cluster:

SELECT  p.param_name AS Name,
        v.node_id AS Node,
        p.param_type AS Type,
        p.param_default AS 'Default',
        p.param_min AS Minimum,
        p.param_max AS Maximum,
        CASE p.param_mandatory WHEN 1 THEN 'Y' ELSE 'N' END AS 'Required',
        v.config_value AS Current
FROM    config_params p
JOIN    config_values v
ON      p.param_number = v.config_param
WHERE   p.param_name
  IN ('NodeId', 'NoOfReplicas', 'HostName',
      'DataMemory', 'IndexMemory', 'TotalSendBufferMemory')\G

The output from this query when run on a small NDB Cluster with 2 data nodes used for simple testing is shown here:

*************************** 1. row ***************************
    Name: NodeId
    Node: 2
    Type: unsigned
 Default:
 Minimum: 1
 Maximum: 48
Required: Y
 Current: 2
*************************** 2. row ***************************
    Name: HostName
    Node: 2
    Type: string
 Default: localhost
 Minimum:
 Maximum:
Required: N
 Current: 127.0.0.1
*************************** 3. row ***************************
    Name: TotalSendBufferMemory
    Node: 2
    Type: unsigned
 Default: 0
 Minimum: 262144
 Maximum: 4294967039
Required: N
 Current: 0
*************************** 4. row ***************************
    Name: NoOfReplicas
    Node: 2
    Type: unsigned
 Default: 2
 Minimum: 1
 Maximum: 4
Required: N
 Current: 2
*************************** 5. row ***************************
    Name: DataMemory
    Node: 2
    Type: unsigned
 Default: 102760448
 Minimum: 1048576
 Maximum: 1099511627776
Required: N
 Current: 524288000
*************************** 6. row ***************************
    Name: NodeId
    Node: 3
    Type: unsigned
 Default:
 Minimum: 1
 Maximum: 48
Required: Y
 Current: 3
*************************** 7. row ***************************
    Name: HostName
    Node: 3
    Type: string
 Default: localhost
 Minimum:
 Maximum:
Required: N
 Current: 127.0.0.1
*************************** 8. row ***************************
    Name: TotalSendBufferMemory
    Node: 3
    Type: unsigned
 Default: 0
 Minimum: 262144
 Maximum: 4294967039
Required: N
 Current: 0
*************************** 9. row ***************************
    Name: NoOfReplicas
    Node: 3
    Type: unsigned
 Default: 2
 Minimum: 1
 Maximum: 4
Required: N
 Current: 2
*************************** 10. row ***************************
    Name: DataMemory
    Node: 3
    Type: unsigned
 Default: 102760448
 Minimum: 1048576
 Maximum: 1099511627776
Required: N
 Current: 524288000
10 rows in set (0.01 sec)

21.5.10.10 The ndbinfo counters Table

The counters table provides running totals of events such as reads and writes for specific kernel blocks and data nodes. Counts are kept from the most recent node start or restart; a node start or restart resets all counters on that node. Not all kernel blocks have all types of counters.

The following table provides information about the columns in the counters table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.366 Columns of the counters table

Column Name Type Description
node_id integer The data node ID
block_name string Name of the associated NDB kernel block (see NDB Kernel Blocks).
block_instance integer Block instance
counter_id integer The counter's internal ID number; normally an integer between 1 and 10, inclusive.
counter_name string The name of the counter. See text for names of individual counters and the NDB kernel block with which each counter is associated.
val integer The counter's value

Each counter is associated with a particular NDB kernel block.

The OPERATIONS counter is associated with the DBLQH (local query handler) kernel block (see The DBLQH Block). A primary-key read counts as one operation, as does a primary-key update. For reads, there is one operation in DBLQH per operation in DBTC. For writes, there is one operation counted per replica.

The ATTRINFO, TRANSACTIONS, COMMITS, READS, LOCAL_READS, SIMPLE_READS, WRITES, LOCAL_WRITES, ABORTS, TABLE_SCANS, and RANGE_SCANS counters are associated with the DBTC (transaction co-ordinator) kernel block (see The DBTC Block).
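
For example, a query such as the following (an illustrative sketch; the counters actually present can vary between NDB releases) shows transaction-related totals from the DBTC block on each data node:

SELECT    node_id, counter_name, val
FROM      ndbinfo.counters
WHERE     block_name = 'DBTC'
AND       counter_name IN ('TRANSACTIONS', 'COMMITS', 'ABORTS')
ORDER BY  node_id, counter_name;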

LOCAL_WRITES and LOCAL_READS are primary-key operations using a transaction coordinator in a node that also holds the primary replica of the record.

The READS counter includes all reads. LOCAL_READS includes only those reads of the primary replica on the same node as this transaction coordinator. SIMPLE_READS includes only those reads in which the read operation is both the beginning and ending operation for a given transaction. Simple reads do not hold locks, but are part of a transaction in that they observe uncommitted changes made by the transaction containing them, but not those of any other uncommitted transactions. Such reads are simple from the point of view of the TC block; since they hold no locks they are not durable, and once DBTC has routed them to the relevant LQH block, it holds no state for them.

ATTRINFO keeps a count of the number of times an interpreted program is sent to the data node. See NDB Protocol Messages, for more information about ATTRINFO messages in the NDB kernel.

The LOCAL_TABLE_SCANS_SENT, READS_RECEIVED, PRUNED_RANGE_SCANS_RECEIVED, RANGE_SCANS_RECEIVED, LOCAL_READS_SENT, CONST_PRUNED_RANGE_SCANS_RECEIVED, LOCAL_RANGE_SCANS_SENT, REMOTE_READS_SENT, REMOTE_RANGE_SCANS_SENT, READS_NOT_FOUND, SCAN_BATCHES_RETURNED, TABLE_SCANS_RECEIVED, and SCAN_ROWS_RETURNED counters are associated with the DBSPJ (select push-down join) kernel block (see The DBSPJ Block).

The block_name and block_instance columns provide, respectively, the applicable NDB kernel block name and instance number. You can use these to obtain information about specific threads from the threadblocks table.

A number of counters provide information about transporter overload and send buffer sizing when troubleshooting such issues. For each LQH instance, there is one instance of each counter in the following list:

  • LQHKEY_OVERLOAD: Number of primary key requests rejected at the LQH block instance due to transporter overload

  • LQHKEY_OVERLOAD_TC: Count of instances of LQHKEY_OVERLOAD where the TC node transporter was overloaded

  • LQHKEY_OVERLOAD_READER: Count of instances of LQHKEY_OVERLOAD where the API reader (reads only) node was overloaded.

  • LQHKEY_OVERLOAD_NODE_PEER: Count of instances of LQHKEY_OVERLOAD where the next backup data node (writes only) was overloaded

  • LQHKEY_OVERLOAD_SUBSCRIBER: Count of instances of LQHKEY_OVERLOAD where an event subscriber (writes only) was overloaded.

  • LQHSCAN_SLOWDOWNS: Count of instances where a fragment scan batch size was reduced due to scanning API transporter overload.
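
A query such as the following (illustrative) retrieves all of these overload-related counters for every LQH instance at once:

SELECT    node_id, block_instance, counter_name, val
FROM      ndbinfo.counters
WHERE     counter_name LIKE 'LQHKEY_OVERLOAD%'
OR        counter_name = 'LQHSCAN_SLOWDOWNS'
ORDER BY  node_id, block_instance, counter_name;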

21.5.10.11 The ndbinfo cpustat Table

The cpustat table provides per-thread CPU statistics gathered each second, for each thread running in the NDB kernel.

The following table provides information about the columns in the cpustat table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.367 Columns of the cpustat table

Column Name Type Description
node_id integer ID of the node where the thread is running
thr_no integer Thread ID (specific to this node)
OS_user integer OS user time
OS_system integer OS system time
OS_idle integer OS idle time
thread_exec integer Thread execution time
thread_sleeping integer Thread sleep time
thread_send integer Thread send time
thread_buffer_full integer Thread buffer full time
elapsed_time integer Elapsed time

This table was added in NDB 7.5.2.
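
For example, a query such as this one (illustrative) shows how each thread's most recent second was divided between OS-level and thread-level activity:

SELECT    node_id, thr_no, OS_user, OS_system, OS_idle,
          thread_exec, thread_sleeping, thread_send, thread_buffer_full
FROM      ndbinfo.cpustat
ORDER BY  node_id, thr_no;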

21.5.10.12 The ndbinfo cpustat_50ms Table

The cpustat_50ms table provides raw, per-thread CPU data obtained each 50 milliseconds for each thread running in the NDB kernel.

Like cpustat_1sec and cpustat_20sec, this table shows 20 measurement sets per thread, each referencing a period of the named duration. Thus, cpustat_50ms provides 1 second of history.

The following table provides information about the columns in the cpustat_50ms table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.368 Columns of the cpustat_50ms table

Column Name Type Description
node_id integer ID of the node where the thread is running
thr_no integer Thread ID (specific to this node)
OS_user_time integer OS user time
OS_system_time integer OS system time
OS_idle_time integer OS idle time
exec_time integer Thread execution time
sleep_time integer Thread sleep time
send_time integer Thread send time
buffer_full_time integer Thread buffer full time
elapsed_time integer Elapsed time

This table was added in NDB 7.5.2.

21.5.10.13 The ndbinfo cpustat_1sec Table

The cpustat_1sec table provides raw, per-thread CPU data obtained each second for each thread running in the NDB kernel.

Like cpustat_50ms and cpustat_20sec, this table shows 20 measurement sets per thread, each referencing a period of the named duration. Thus, cpustat_1sec provides 20 seconds of history.
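
For example, a query such as the following (an illustrative calculation, which assumes that exec_time and elapsed_time use the same time unit) aggregates the 20 one-second samples per thread into an average execution percentage over the history window:

SELECT    node_id, thr_no,
          100 * SUM(exec_time) / SUM(elapsed_time) AS avg_exec_pct
FROM      ndbinfo.cpustat_1sec
GROUP BY  node_id, thr_no
ORDER BY  node_id, thr_no;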

The following table provides information about the columns in the cpustat_1sec table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.369 Columns of the cpustat_1sec table

Column Name Type Description
node_id integer ID of the node where the thread is running
thr_no integer Thread ID (specific to this node)
OS_user_time integer OS user time
OS_system_time integer OS system time
OS_idle_time integer OS idle time
exec_time integer Thread execution time
sleep_time integer Thread sleep time
send_time integer Thread send time
buffer_full_time integer Thread buffer full time
elapsed_time integer Elapsed time

This table was added in NDB 7.5.2.

21.5.10.14 The ndbinfo cpustat_20sec Table

The cpustat_20sec table provides raw, per-thread CPU data obtained each 20 seconds, for each thread running in the NDB kernel.

Like cpustat_50ms and cpustat_1sec, this table shows 20 measurement sets per thread, each referencing a period of the named duration. Thus, cpustat_20sec provides 400 seconds of history.

The following table provides information about the columns in the cpustat_20sec table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.370 Columns of the cpustat_20sec table

Column Name Type Description
node_id integer ID of the node where the thread is running
thr_no integer Thread ID (specific to this node)
OS_user_time integer OS user time
OS_system_time integer OS system time
OS_idle_time integer OS idle time
exec_time integer Thread execution time
sleep_time integer Thread sleep time
send_time integer Thread send time
buffer_full_time integer Thread buffer full time
elapsed_time integer Elapsed time

This table was added in NDB 7.5.2.

21.5.10.15 The ndbinfo dict_obj_info Table

The dict_obj_info table provides information about NDB data dictionary (DICT) objects such as tables and indexes. (The dict_obj_types table can be queried for a list of all the types.) This information includes the object's type, state, parent object (if any), and fully qualified name.

The following table provides information about the columns in the dict_obj_info table. For each column, the table shows the name, data type, and a brief description.

Table 21.371 Columns of the dict_obj_info table

Column Name Type Description
type integer Type of DICT object; join on dict_obj_types to obtain the name
id integer Object identifier
version integer Object version
state integer Object state
parent_obj_type integer Parent object's type (a dict_obj_types type ID); 0 indicates that the object has no parent
parent_obj_id integer Parent object ID (such as a base table); 0 indicates that the object has no parent
fq_name string Fully qualified object name; for a table, this has the form database_name/def/table_name, for a primary key, the form is sys/def/table_id/PRIMARY, and for a unique key it is sys/def/table_id/uk_name$unique

This table was added in NDB 7.5.4.
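
For example, a query such as the following (illustrative; the test database name is used here purely as an example) resolves each object's numeric type to its name by joining on the dict_obj_types table:

SELECT    o.fq_name, t.type_name, o.id, o.version
FROM      ndbinfo.dict_obj_info o
JOIN      ndbinfo.dict_obj_types t
ON        o.type = t.type_id
WHERE     o.fq_name LIKE 'test/def/%';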

21.5.10.16 The ndbinfo dict_obj_types Table

The dict_obj_types table is a static table listing possible dictionary object types used in the NDB kernel. These are the same types defined by Object::Type in the NDB API.

The following table provides information about the columns in the dict_obj_types table. For each column, the table shows the name, data type, and a brief description.

Table 21.372 Columns of the dict_obj_types table

Column Name Type Description
type_id integer The type ID for this type
type_name string The name of this type

21.5.10.17 The ndbinfo disk_write_speed_base Table

The disk_write_speed_base table provides base information about the speed of disk writes during LCP, backup, and restore operations.

The following table provides information about the columns in the disk_write_speed_base table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.373 Columns of the disk_write_speed_base table

Column Name Type Description
node_id integer Node ID of this node
thr_no integer Thread ID of this LDM thread
millis_ago integer Milliseconds since this reporting period ended
millis_passed integer Milliseconds elapsed in this reporting period
backup_lcp_bytes_written integer Number of bytes written to disk by local checkpoints and backup processes during this period
redo_bytes_written integer Number of bytes written to REDO log during this period
target_disk_write_speed integer Actual speed of disk writes per LDM thread (base data)

21.5.10.18 The ndbinfo disk_write_speed_aggregate Table

The disk_write_speed_aggregate table provides aggregated information about the speed of disk writes during LCP, backup, and restore operations.

The following table provides information about the columns in the disk_write_speed_aggregate table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.374 Columns in the disk_write_speed_aggregate table

Column Name Type Description
node_id integer Node ID of this node
thr_no integer Thread ID of this LDM thread
backup_lcp_speed_last_sec integer Number of bytes written to disk by backup and LCP processes in the last second
redo_speed_last_sec integer Number of bytes written to REDO log in the last second
backup_lcp_speed_last_10sec integer Number of bytes written to disk by backup and LCP processes per second, averaged over the last 10 seconds
redo_speed_last_10sec integer Number of bytes written to REDO log per second, averaged over the last 10 seconds
std_dev_backup_lcp_speed_last_10sec integer Standard deviation in number of bytes written to disk by backup and LCP processes per second, averaged over the last 10 seconds
std_dev_redo_speed_last_10sec integer Standard deviation in number of bytes written to REDO log per second, averaged over the last 10 seconds
backup_lcp_speed_last_60sec integer Number of bytes written to disk by backup and LCP processes per second, averaged over the last 60 seconds
redo_speed_last_60sec integer Number of bytes written to REDO log per second, averaged over the last 60 seconds
std_dev_backup_lcp_speed_last_60sec integer Standard deviation in number of bytes written to disk by backup and LCP processes per second, averaged over the last 60 seconds
std_dev_redo_speed_last_60sec integer Standard deviation in number of bytes written to REDO log per second, averaged over the last 60 seconds
slowdowns_due_to_io_lag integer Number of seconds since last node start that disk writes were slowed due to REDO log I/O lag
slowdowns_due_to_high_cpu integer Number of seconds since last node start that disk writes were slowed due to high CPU usage
disk_write_speed_set_to_min integer Number of seconds since last node start that disk write speed was set to minimum
current_target_disk_write_speed integer Actual speed of disk writes per LDM thread (aggregated)
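
When troubleshooting disk write throughput, a query such as this one (illustrative) shows how often writes have been throttled on each LDM thread since the last node start:

SELECT    node_id, thr_no,
          slowdowns_due_to_io_lag,
          slowdowns_due_to_high_cpu,
          disk_write_speed_set_to_min,
          current_target_disk_write_speed
FROM      ndbinfo.disk_write_speed_aggregate
ORDER BY  node_id, thr_no;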

21.5.10.19 The ndbinfo disk_write_speed_aggregate_node Table

The disk_write_speed_aggregate_node table provides aggregated information per node about the speed of disk writes during LCP, backup, and restore operations.

The following table provides information about the columns in the disk_write_speed_aggregate_node table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.375 Columns of the disk_write_speed_aggregate_node table

Column Name Type Description
node_id integer Node ID of this node
backup_lcp_speed_last_sec integer Number of bytes written to disk by backup and LCP processes in the last second
redo_speed_last_sec integer Number of bytes written to REDO log in the last second
backup_lcp_speed_last_10sec integer Number of bytes written to disk by backup and LCP processes per second, averaged over the last 10 seconds
redo_speed_last_10sec integer Number of bytes written to REDO log per second, averaged over the last 10 seconds
backup_lcp_speed_last_60sec integer Number of bytes written to disk by backup and LCP processes per second, averaged over the last 60 seconds
redo_speed_last_60sec integer Number of bytes written to REDO log per second, averaged over the last 60 seconds

21.5.10.20 The ndbinfo diskpagebuffer Table

The diskpagebuffer table provides statistics about disk page buffer usage by NDB Cluster Disk Data tables.

The following table provides information about the columns in the diskpagebuffer table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.376 Columns of the diskpagebuffer table

Column Name Type Description
node_id integer The data node ID
block_instance integer Block instance
pages_written integer Number of pages written to disk.
pages_written_lcp integer Number of pages written by local checkpoints.
pages_read integer Number of pages read from disk
log_waits integer Number of page writes waiting for log to be written to disk
page_requests_direct_return integer Number of requests for pages that were available in buffer
page_requests_wait_queue integer Number of requests that had to wait for pages to become available in buffer
page_requests_wait_io integer Number of requests that had to be read from pages on disk (pages were unavailable in buffer)

You can use this table with NDB Cluster Disk Data tables to determine whether DiskPageBufferMemory is sufficiently large to allow data to be read from the buffer rather than from disk; minimizing disk seeks can help improve performance of such tables.

You can determine the proportion of reads from DiskPageBufferMemory to the total number of reads using a query such as this one, which obtains this ratio as a percentage:

SELECT
  node_id,
  100 * page_requests_direct_return /
    (page_requests_direct_return + page_requests_wait_io)
      AS hit_ratio
FROM ndbinfo.diskpagebuffer;

The result from this query should be similar to what is shown here, with one row for each data node in the cluster (in this example, the cluster has 4 data nodes):

+---------+-----------+
| node_id | hit_ratio |
+---------+-----------+
|       5 |   97.6744 |
|       6 |   97.6879 |
|       7 |   98.1776 |
|       8 |   98.1343 |
+---------+-----------+
4 rows in set (0.00 sec)

hit_ratio values approaching 100% indicate that only a very small number of reads are being made from disk rather than from the buffer, which means that Disk Data read performance is approaching an optimum level. If any of these values are less than 95%, this is a strong indicator that the setting for DiskPageBufferMemory needs to be increased in the config.ini file.

Note

A change in DiskPageBufferMemory requires a rolling restart of all of the cluster's data nodes before it takes effect.

block_instance refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the threadblocks table. Using this information, you can obtain information about disk page buffer metrics relating to individual threads; an example query using LIMIT 1 to limit the output to a single thread is shown here:

mysql> SELECT
     >   node_id, thr_no, block_name, thread_name, pages_written,
     >   pages_written_lcp, pages_read, log_waits,
     >   page_requests_direct_return, page_requests_wait_queue,
     >   page_requests_wait_io
     > FROM ndbinfo.diskpagebuffer
     >   INNER JOIN ndbinfo.threadblocks USING (node_id, block_instance)
     >   INNER JOIN ndbinfo.threads USING (node_id, thr_no)
     > WHERE block_name = 'PGMAN' LIMIT 1\G
*************************** 1. row ***************************
                    node_id: 1
                     thr_no: 1
                 block_name: PGMAN
                thread_name: rep
              pages_written: 0
          pages_written_lcp: 0
                 pages_read: 1
                  log_waits: 0
page_requests_direct_return: 4
   page_requests_wait_queue: 0
      page_requests_wait_io: 1
1 row in set (0.01 sec)

21.5.10.21 The ndbinfo error_messages Table

The error_messages table provides information about NDB errors, including error codes, status values, classifications, and descriptions.

The following table provides information about the columns in the error_messages table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.377 Columns of the error_messages table

Column Name Type Description
error_code integer Numeric error code
error_description string Description of error
error_status string Error status code
error_classification integer Error classification code

error_code is a numeric NDB error code. This is the same error code that can be supplied to ndb_perror or perror --ndb.

error_description provides a basic description of the condition causing the error.

The error_status column provides status information relating to the error. Possible values for this column are listed here:

  • No error

  • Illegal connect string

  • Illegal server handle

  • Illegal reply from server

  • Illegal number of nodes

  • Illegal node status

  • Out of memory

  • Management server not connected

  • Could not connect to socket

  • Start failed

  • Stop failed

  • Restart failed

  • Could not start backup

  • Could not abort backup

  • Could not enter single user mode

  • Could not exit single user mode

  • Failed to complete configuration change

  • Failed to get configuration

  • Usage error

  • Success

  • Permanent error

  • Temporary error

  • Unknown result

  • Temporary error, restart node

  • Permanent error, external action needed

  • Ndbd file system error, restart node initial

  • Unknown

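For example, you can obtain the codes and descriptions of all errors having a given status, such as Temporary error, using a query like this one (the rows returned depend on the NDB software version):

SELECT error_code, error_description
FROM ndbinfo.error_messages
WHERE error_status = 'Temporary error';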

The error_classification column shows the error classification. See NDB Error Classifications for information about classification codes and their meanings.

The error_messages table was added in NDB 7.6.4.

21.5.10.22 The ndbinfo locks_per_fragment Table

The locks_per_fragment table provides information about counts of lock claim requests, and the outcomes of these requests on a per-fragment basis, serving as a companion table to operations_per_fragment and memory_per_fragment. This table also shows the total time spent waiting for locks successfully and unsuccessfully since fragment or table creation, or since the most recent restart.

The following table provides information about the columns in the locks_per_fragment table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.378 Columns of the locks_per_fragment table

Column Name Type Description
fq_name string Fully qualified table name
parent_fq_name string Fully qualified name of parent object
type string Table type; see text for possible values
table_id integer Table ID
node_id integer Reporting node ID
block_instance integer LDM instance ID
fragment_num integer Fragment identifier
ex_req integer Exclusive lock requests started
ex_imm_ok integer Exclusive lock requests immediately granted
ex_wait_ok integer Exclusive lock requests granted following wait
ex_wait_fail integer Exclusive lock requests not granted
sh_req integer Shared lock requests started
sh_imm_ok integer Shared lock requests immediately granted
sh_wait_ok integer Shared lock requests granted following wait
sh_wait_fail integer Shared lock requests not granted
wait_ok_millis integer Time spent waiting for lock requests that were granted, in milliseconds
wait_fail_millis integer Time spent waiting for lock requests that failed, in milliseconds

block_instance refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the threadblocks table.

fq_name is a fully qualified database object name in database/schema/name format, such as test/def/t1 or sys/def/10/b$unique.

parent_fq_name is the fully qualified name of this object's parent object (table).

table_id is the table's internal ID generated by NDB. This is the same internal table ID shown in other ndbinfo tables; it is also visible in the output of ndb_show_tables.

The type column shows the type of table. This is always one of System table, User table, Unique hash index, Hash index, Unique ordered index, Ordered index, Hash index trigger, Subscription trigger, Read only constraint, Index trigger, Reorganize trigger, Tablespace, Log file group, Data file, Undo file, Hash map, Foreign key definition, Foreign key parent trigger, Foreign key child trigger, or Schema transaction.

The values shown in all of the columns ex_req, ex_imm_ok, ex_wait_ok, ex_wait_fail, sh_req, sh_imm_ok, sh_wait_ok, and sh_wait_fail represent cumulative numbers of requests since the table or fragment was created, or since the last restart of this node, whichever of these occurred later. This is also true for the time values shown in the wait_ok_millis and wait_fail_millis columns.

Every lock request is considered either to be in progress, or to have completed in some way (that is, to have succeeded or failed). This means that the following relationships are true:

ex_req >= (ex_imm_ok + ex_wait_ok + ex_wait_fail)

sh_req >= (sh_imm_ok + sh_wait_ok + sh_wait_fail)

The number of requests currently in progress is the current number of incomplete requests, which can be found as shown here:

[exclusive lock requests in progress] =
    ex_req - (ex_imm_ok + ex_wait_ok + ex_wait_fail)

[shared lock requests in progress] =
    sh_req - (sh_imm_ok + sh_wait_ok + sh_wait_fail)

A failed wait indicates an aborted transaction, but the abort may or may not be caused by a lock wait timeout. You can obtain the total number of aborts while waiting for locks as shown here:

[aborts while waiting for locks] = ex_wait_fail + sh_wait_fail
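All of these derived quantities can be obtained directly in SQL; the following query (an illustration only, not part of the ndbinfo schema) reports them for each fragment replica:

SELECT
  fq_name, node_id, fragment_num,
  ex_req - (ex_imm_ok + ex_wait_ok + ex_wait_fail) AS ex_in_progress,
  sh_req - (sh_imm_ok + sh_wait_ok + sh_wait_fail) AS sh_in_progress,
  ex_wait_fail + sh_wait_fail AS aborts_while_waiting
FROM ndbinfo.locks_per_fragment;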

The locks_per_fragment table was added in NDB 7.5.3.

21.5.10.23 The ndbinfo logbuffers Table

The logbuffers table provides information on NDB Cluster log buffer usage.

The following table provides information about the columns in the logbuffers table. For each column, the table shows the name, data type, and a brief description.

Table 21.379 Columns in the logbuffers table

Column Name Type Description
node_id integer The ID of this data node.
log_type string Type of log. Prior to NDB 7.6.6, one of: REDO or DD-UNDO. In NDB 7.6.6 or later, one of: REDO, DD-UNDO, BACKUP-DATA, or BACKUP-LOG.
log_id integer The log ID.
log_part integer The log part number.
total integer Total space available for this log.
used integer Space used by this log.

Beginning with NDB 7.6.6, logbuffers table rows reflecting two additional log types are available when performing an NDB backup. One of these rows has the log type BACKUP-DATA, which shows the amount of data buffer used during backup to copy fragments to backup files. The other row has the log type BACKUP-LOG, which displays the amount of log buffer used during the backup to record changes made after the backup has started. One each of these log_type rows is shown in the logbuffers table for each data node in the cluster. These rows are not present unless an NDB backup is currently being performed. (Bug #25822988)

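You can watch log buffer consumption as a percentage of the space available using a query similar to the one shown here:

SELECT
  node_id, log_type, log_id, log_part,
  100 * used / total AS pct_used
FROM ndbinfo.logbuffers;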

21.5.10.24 The ndbinfo logspaces Table

This table provides information about NDB Cluster log space usage.

The following table provides information about the columns in the logspaces table. For each column, the table shows the name, data type, and a brief description.

Table 21.380 Columns in the logspaces table

Column Name Type Description
node_id integer The ID of this data node.
log_type string Type of log; one of: REDO or DD-UNDO.
log_id integer The log ID.
log_part integer The log part number.
total integer Total space available for this log.
used integer Space used by this log.

21.5.10.25 The ndbinfo membership Table

The membership table describes the view that each data node has of all the others in the cluster, including node group membership, president node, arbitrator, arbitrator successor, arbitrator connection states, and other information.

The following table provides information about the columns in the membership table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.381 Columns of the membership table

Column Name Type Description
node_id integer This node's node ID
group_id integer Node group to which this node belongs
left_node integer Node ID of the previous node
right_node integer Node ID of the next node
president integer President's node ID
successor integer Node ID of successor to president
succession_order integer Order in which this node succeeds to presidency
Conf_HB_order integer -
arbitrator integer Node ID of arbitrator
arb_ticket string Internal identifier used to track arbitration
arb_state Enumeration (see text) Arbitration state
arb_connected Yes or No Whether this node is connected to the arbitrator
connected_rank1_arbs List of node IDs Connected arbitrators of rank 1
connected_rank2_arbs List of node IDs Connected arbitrators of rank 2

The node ID and node group ID are the same as reported by ndb_mgm -e "SHOW".

left_node and right_node are defined in terms of a model that connects all data nodes in a circle, in order of their node IDs, similar to the ordering of the numbers on a clock dial, as shown here:

Figure 21.38 Circular Arrangement of NDB Cluster Nodes

In this example, we have 8 data nodes, numbered 5, 6, 7, 8, 12, 13, 14, and 15, ordered clockwise in a circle. We determine left and right from the interior of the circle. The node to the left of node 5 is node 15, and the node to the right of node 5 is node 6. You can see all these relationships by running the following query and observing the output:

mysql> SELECT node_id,left_node,right_node
    -> FROM ndbinfo.membership;
+---------+-----------+------------+
| node_id | left_node | right_node |
+---------+-----------+------------+
|       5 |        15 |          6 |
|       6 |         5 |          7 |
|       7 |         6 |          8 |
|       8 |         7 |         12 |
|      12 |         8 |         13 |
|      13 |        12 |         14 |
|      14 |        13 |         15 |
|      15 |        14 |          5 |
+---------+-----------+------------+
8 rows in set (0.00 sec)

The designations left and right are used in the event log in the same way.

The president node is the node viewed by the current node as responsible for setting an arbitrator (see NDB Cluster Start Phases). If the president fails or becomes disconnected, the current node expects the node whose ID is shown in the successor column to become the new president. The succession_order column shows the place in the succession queue that the current node views itself as having.

In a normal NDB Cluster, all data nodes should see the same node as president, and the same node (other than the president) as its successor. In addition, the current president should see itself as 1 in the order of succession, the successor node should see itself as 2, and so on.

All nodes should show the same arb_ticket values as well as the same arb_state values. Possible arb_state values are ARBIT_NULL, ARBIT_INIT, ARBIT_FIND, ARBIT_PREP1, ARBIT_PREP2, ARBIT_START, ARBIT_RUN, ARBIT_CHOOSE, ARBIT_CRASH, and UNKNOWN.

arb_connected shows whether this node is connected to the node shown as this node's arbitrator.

The connected_rank1_arbs and connected_rank2_arbs columns each display a list of 0 or more arbitrators having an ArbitrationRank equal to 1, or to 2, respectively.

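Because all nodes should agree on the arbitrator, arb_ticket, and arb_state, a query such as the following should return exactly one row in a healthy cluster; more than one row indicates that the data nodes have divergent views of arbitration:

SELECT DISTINCT arbitrator, arb_ticket, arb_state
FROM ndbinfo.membership;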

Note

Both management nodes and API nodes are eligible to become arbitrators.

21.5.10.26 The ndbinfo memoryusage Table

Querying this table provides information similar to that provided by the ALL REPORT MemoryUsage command in the ndb_mgm client, or logged by ALL DUMP 1000.

The following table provides information about the columns in the memoryusage table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.382 Columns of the memoryusage table

Column Name Type Description
node_id integer The node ID of this data node.
memory_type string One of Data memory, Index memory, or Long message buffer.
used integer Number of bytes currently used for data memory or index memory by this data node.
used_pages integer Number of pages currently used for data memory or index memory by this data node; see text.
total integer Total number of bytes of data memory or index memory available for this data node; see text.
total_pages integer Total number of memory pages available for data memory or index memory on this data node; see text.

The total column represents the total amount of memory in bytes available for the given resource (data memory or index memory) on a particular data node. This number should be approximately equal to the setting of the corresponding configuration parameter in the config.ini file.

Suppose that the cluster has 2 data nodes having node IDs 5 and 6, and the config.ini file contains the following:

[ndbd default]
DataMemory = 1G
IndexMemory = 1G

Suppose also that the value of the LongMessageBuffer configuration parameter is allowed to assume its default (64 MB).

The following query shows approximately the same values:

mysql> SELECT node_id, memory_type, total
     > FROM ndbinfo.memoryusage;
+---------+---------------------+------------+
| node_id | memory_type         | total      |
+---------+---------------------+------------+
|       5 | Data memory         | 1073741824 |
|       5 | Index memory        | 1074003968 |
|       5 | Long message buffer |   67108864 |
|       6 | Data memory         | 1073741824 |
|       6 | Index memory        | 1074003968 |
|       6 | Long message buffer |   67108864 |
+---------+---------------------+------------+
6 rows in set (0.00 sec)

In this case, the total column values for index memory are slightly higher than the value set for IndexMemory due to internal rounding.

For the used_pages and total_pages columns, resources are measured in pages, which are 32K in size for DataMemory and 8K for IndexMemory. For long message buffer memory, the page size is 256 bytes.

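To see current consumption of each memory resource on each data node, in both bytes and pages, you can use a query such as this one:

SELECT
  node_id, memory_type,
  used, total,
  100 * used / total AS pct_used,
  used_pages, total_pages
FROM ndbinfo.memoryusage;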

21.5.10.27 The ndbinfo memory_per_fragment Table

The memory_per_fragment table provides information about the usage of memory by individual fragments.

The following table provides information about the columns in the memory_per_fragment table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.383 Columns of the memory_per_fragment table

Column Name Type Description
fq_name string Name of this fragment
parent_fq_name string Name of this fragment's parent
type string Type of object; see text for possible values
table_id integer Table ID for this table
node_id integer Node ID for this node
block_instance integer Kernel block instance ID
fragment_num integer Fragment ID (number)
fixed_elem_alloc_bytes integer Number of bytes allocated for fixed-sized elements
fixed_elem_free_bytes integer Free bytes remaining in pages allocated to fixed-size elements
fixed_elem_size_bytes integer Length of each fixed-size element in bytes
fixed_elem_count integer Number of fixed-size elements
fixed_elem_free_count decimal Number of free rows for fixed-size elements
var_elem_alloc_bytes integer Number of bytes allocated for variable-size elements
var_elem_free_bytes integer Free bytes remaining in pages allocated to variable-size elements
var_elem_count integer Number of variable-size elements
hash_index_alloc_bytes integer Number of bytes allocated to hash indexes

The type column from this table shows the dictionary object type used for this fragment (Object::Type, in the NDB API), and can take any one of the values shown in the following list:

  • System table

  • User table

  • Unique hash index

  • Hash index

  • Unique ordered index

  • Ordered index

  • Hash index trigger

  • Subscription trigger

  • Read only constraint

  • Index trigger

  • Reorganize trigger

  • Tablespace

  • Log file group

  • Data file

  • Undo file

  • Hash map

  • Foreign key definition

  • Foreign key parent trigger

  • Foreign key child trigger

  • Schema transaction

You can also obtain this list by executing SELECT * FROM ndbinfo.dict_obj_types in the mysql client.

The block_instance column provides the NDB kernel block instance number. You can use this to obtain information about specific threads from the threadblocks table.

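A common use for this table is to estimate the total memory allocated per schema object by grouping on fq_name, as shown in the following illustrative query, which sums the fixed-element, variable-element, and hash index allocations:

SELECT
  fq_name,
  SUM(fixed_elem_alloc_bytes + var_elem_alloc_bytes
      + hash_index_alloc_bytes) AS total_alloc_bytes
FROM ndbinfo.memory_per_fragment
GROUP BY fq_name;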

21.5.10.28 The ndbinfo nodes Table

This table contains information on the status of data nodes. For each data node that is running in the cluster, a corresponding row in this table provides the node's node ID, status, and uptime. For nodes that are starting, it also shows the current start phase.

The following table provides information about the columns in the nodes table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.384 Columns of the nodes table

Column Name Type Description
node_id integer The data node's unique node ID in the cluster.
uptime integer Time since the node was last started, in seconds.
status string Current status of the data node; see text for possible values.
start_phase integer If the data node is starting, the current start phase.
config_generation integer The version of the cluster configuration file in use on this data node.

The uptime column shows the time in seconds that this node has been running since it was last started or restarted. This is a BIGINT value. This figure includes the time actually needed to start the node; in other words, this counter starts running the moment that ndbd or ndbmtd is first invoked; thus, even for a node that has not yet finished starting, uptime may show a nonzero value.

The status column shows the node's current status. This is one of: NOTHING, CMVMI, STARTING, STARTED, SINGLEUSER, STOPPING_1, STOPPING_2, STOPPING_3, or STOPPING_4. When the status is STARTING, you can see the current start phase in the start_phase column (see later in this section). SINGLEUSER is displayed in the status column for all data nodes when the cluster is in single user mode (see Section 21.5.8, “NDB Cluster Single User Mode”). Seeing one of the STOPPING states does not necessarily mean that the node is shutting down but can mean rather that it is entering a new state. For example, if you put the cluster in single user mode, you can sometimes see data nodes report their state briefly as STOPPING_2 before the status changes to SINGLEUSER.

The start_phase column uses the same range of values as those used in the output of the ndb_mgm client node_id STATUS command (see Section 21.5.2, “Commands in the NDB Cluster Management Client”). If the node is not currently starting, then this column shows 0. For a listing of NDB Cluster start phases with descriptions, see Section 21.5.1, “Summary of NDB Cluster Start Phases”.

The config_generation column shows which version of the cluster configuration is in effect on each data node. This can be useful when performing a rolling restart of the cluster in order to make changes in configuration parameters. For example, from the output of the following SELECT statement, you can see that node 3 is not yet using the latest version of the cluster configuration (6) although nodes 1, 2, and 4 are doing so:

mysql> USE ndbinfo;
Database changed
mysql> SELECT * FROM nodes;
+---------+--------+---------+-------------+-------------------+
| node_id | uptime | status  | start_phase | config_generation |
+---------+--------+---------+-------------+-------------------+
|       1 |  10462 | STARTED |           0 |                 6 |
|       2 |  10460 | STARTED |           0 |                 6 |
|       3 |  10457 | STARTED |           0 |                 5 |
|       4 |  10455 | STARTED |           0 |                 6 |
+---------+--------+---------+-------------+-------------------+
4 rows in set (0.04 sec)

Therefore, for the case just shown, you should restart node 3 to complete the rolling restart of the cluster.

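You can list any data nodes still using an older version of the configuration, and thus still in need of a restart, with a query such as this one:

SELECT node_id, config_generation
FROM ndbinfo.nodes
WHERE config_generation <
      (SELECT MAX(config_generation) FROM ndbinfo.nodes);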

Nodes that are stopped are not accounted for in this table. Suppose that you have an NDB Cluster with 4 data nodes (node IDs 1, 2, 3 and 4), and all nodes are running normally, then this table contains 4 rows, 1 for each data node:

mysql> USE ndbinfo;
Database changed
mysql> SELECT * FROM nodes;
+---------+--------+---------+-------------+-------------------+
| node_id | uptime | status  | start_phase | config_generation |
+---------+--------+---------+-------------+-------------------+
|       1 |  11776 | STARTED |           0 |                 6 |
|       2 |  11774 | STARTED |           0 |                 6 |
|       3 |  11771 | STARTED |           0 |                 6 |
|       4 |  11769 | STARTED |           0 |                 6 |
+---------+--------+---------+-------------+-------------------+
4 rows in set (0.04 sec)

If you shut down one of the nodes, only the nodes that are still running are represented in the output of this SELECT statement, as shown here:

ndb_mgm> 2 STOP
Node 2: Node shutdown initiated
Node 2: Node shutdown completed.
Node 2 has shutdown.
mysql> SELECT * FROM nodes;
+---------+--------+---------+-------------+-------------------+
| node_id | uptime | status  | start_phase | config_generation |
+---------+--------+---------+-------------+-------------------+
|       1 |  11807 | STARTED |           0 |                 6 |
|       3 |  11802 | STARTED |           0 |                 6 |
|       4 |  11800 | STARTED |           0 |                 6 |
+---------+--------+---------+-------------+-------------------+
3 rows in set (0.02 sec)

21.5.10.29 The ndbinfo operations_per_fragment Table

The operations_per_fragment table provides information about the operations performed on individual fragments and fragment replicas, as well as about some of the results from these operations.

The following table provides information about the columns in the operations_per_fragment table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.385 Columns of the operations_per_fragment table

Column Name Type Description
fq_name string Name of this fragment
parent_fq_name string Name of this fragment's parent
type string Type of object; see text for possible values
table_id integer Table ID for this table
node_id integer Node ID for this node
block_instance integer Kernel block instance ID
fragment_num integer Fragment ID (number)
tot_key_reads integer Total number of key reads for this fragment replica
tot_key_inserts integer Total number of key inserts for this fragment replica
tot_key_updates integer Total number of key updates for this fragment replica
tot_key_writes integer Total number of key writes for this fragment replica
tot_key_deletes integer Total number of key deletes for this fragment replica
tot_key_refs integer Number of key operations refused
tot_key_attrinfo_bytes integer Total size of all attrinfo attributes
tot_key_keyinfo_bytes integer Total size of all keyinfo attributes
tot_key_prog_bytes integer Total size of all interpreted programs carried by attrinfo attributes
tot_key_inst_exec integer Total number of instructions executed by interpreted programs for key operations
tot_key_bytes_returned integer Total size of all data and metadata returned from key read operations
tot_frag_scans integer Total number of scans performed on this fragment replica
tot_scan_rows_examined integer Total number of rows examined by scans
tot_scan_rows_returned integer Total number of rows returned to client
tot_scan_bytes_returned integer Total size of data and metadata returned to the client
tot_scan_prog_bytes integer Total size of interpreted programs for scan operations
tot_scan_bound_bytes integer Total size of all bounds used in ordered index scans
tot_scan_inst_exec integer Total number of instructions executed for scans
tot_qd_frag_scans integer Number of times that scans of this fragment replica have been queued
conc_frag_scans integer Number of scans currently active on this fragment replica (excluding queued scans)
conc_qd_frag_scans integer Number of scans currently queued for this fragment replica
tot_commits integer Total number of row changes committed to this fragment replica

The fq_name column contains the fully qualified name of the schema object to which this fragment replica belongs. This name currently has one of the following formats:

  • Base table: DbName/def/TblName

  • BLOB table: DbName/def/NDB$BLOB_BaseTblId_ColNo

  • Ordered index: sys/def/BaseTblId/IndexName

  • Unique index: sys/def/BaseTblId/IndexName$unique

The $unique suffix shown for unique indexes is added by mysqld; for an index created by a different NDB API client application, this may differ, or not be present.

The syntax just shown for fully qualified object names is an internal interface which is subject to change in future releases.

Consider a table t1 created and modified by the following SQL statements:

CREATE DATABASE mydb;

USE mydb;

CREATE TABLE t1 (
  a INT NOT NULL,
  b INT NOT NULL,
  t TEXT NOT NULL,
  PRIMARY KEY (b)
) ENGINE=ndbcluster;

CREATE UNIQUE INDEX ix1 ON t1(b) USING HASH;

If t1 is assigned table ID 11, this yields the fq_name values shown here:

  • Base table: mydb/def/t1

  • BLOB table: mydb/def/NDB$BLOB_11_2

  • Ordered index (primary key): sys/def/11/PRIMARY

  • Unique index: sys/def/11/ix1$unique

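For example, per-fragment activity for the base table t1 can be retrieved by filtering on its fully qualified name, as in the following sketch of a query against the operations_per_fragment table to which these columns belong (the counter values returned depend entirely on the workload):

```sql
-- Per-fragment-replica key and commit activity for the example table t1,
-- filtered by the fq_name value derived above.
SELECT node_id, fragment_num, tot_key_reads, tot_key_updates, tot_commits
FROM ndbinfo.operations_per_fragment
WHERE fq_name = 'mydb/def/t1'
ORDER BY node_id, fragment_num;
```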
For indexes or BLOB tables, the parent_fq_name column contains the fq_name of the corresponding base table. For base tables, this column is always NULL.

The type column shows the schema object type used for this fragment, which can take any one of the values System table, User table, Unique hash index, or Ordered index. BLOB tables are shown as User table.

The table_id column value is unique at any given time, but can be reused if the corresponding object has been deleted. The same ID can be seen using the ndb_show_tables utility.

The block_instance column shows which LDM instance this fragment replica belongs to. You can use this to obtain information about specific threads from the threadblocks table. The first such instance is always numbered 0.

Since there are typically two replicas, and assuming that this is so, each fragment_num value should appear twice in the table, on two different data nodes from the same node group.

Since NDB does not use single-key access for ordered indexes, the counts for tot_key_reads, tot_key_inserts, tot_key_updates, tot_key_writes, and tot_key_deletes are not incremented by ordered index operations.

Note

When using tot_key_writes, you should keep in mind that a write operation in this context updates the row if the key exists, and inserts a new row otherwise. (One use of this is in the NDB implementation of the REPLACE SQL statement.)

The tot_key_refs column shows the number of key operations refused by the LDM. Generally, such a refusal is due to a duplicate key (inserts), a Key not found error (updates, deletes, and reads), or rejection of the operation by an interpreted program used as a predicate on the row matching the key.

The attrinfo and keyinfo attributes counted by the tot_key_attrinfo_bytes and tot_key_keyinfo_bytes columns are attributes of an LQHKEYREQ signal (see The NDB Communication Protocol) used to initiate a key operation by the LDM. An attrinfo typically contains tuple field values (inserts and updates) or projection specifications (for reads); keyinfo contains the primary or unique key needed to locate a given tuple in this schema object.

The value shown by tot_frag_scans includes both full scans (that examine every row) and scans of subsets. Unique indexes and BLOB tables are never scanned, so this value, like other scan-related counts, is 0 for fragment replicas of these.

tot_scan_rows_examined may display less than the total number of rows in a given fragment replica, since ordered index scans can be limited by bounds. In addition, a client may choose to end a scan before all potentially matching rows have been examined; this occurs when using an SQL statement containing a LIMIT or EXISTS clause, for example. tot_scan_rows_returned is always less than or equal to tot_scan_rows_examined.

tot_scan_bytes_returned includes, in the case of pushed joins, projections returned to the DBSPJ block in the NDB kernel.

tot_qd_frag_scans can be affected by the setting for the MaxParallelScansPerFragment data node configuration parameter, which limits the number of scans that may execute concurrently on a single fragment replica.

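Taken together, the scan counters can be used to spot inefficient access patterns. The following query is a sketch (not a tuned diagnostic) that lists fragment replicas whose scans examine far more rows than they return:

```sql
-- Fragment replicas where scans examine at least ten times as many rows
-- as they return, which may indicate full scans or overly wide bounds.
SELECT fq_name, node_id, fragment_num,
       tot_frag_scans, tot_scan_rows_examined, tot_scan_rows_returned
FROM ndbinfo.operations_per_fragment
WHERE tot_frag_scans > 0
  AND tot_scan_rows_examined >= 10 * tot_scan_rows_returned
ORDER BY tot_scan_rows_examined DESC;
```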
21.5.10.30 The ndbinfo processes Table

This table contains information about NDB Cluster node processes; each node is represented by a row in the table. Only nodes that are connected to the cluster are shown in this table. You can obtain information about nodes that are configured but not connected to the cluster from the nodes and config_nodes tables.

The following table provides information about the columns in the processes table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.386 Columns of the processes table

Column Name Type Description
node_id integer The node's unique node ID in the cluster
node_type string Type of node (management, data, or API node; see text)
node_version string Version of the NDB software program running on this node.
process_id integer This node's process ID
angel_process_id integer Process ID of this node's angel process
process_name string Name of the executable
service_URI string Service URI of this node (see text)

node_id is the ID assigned to this node in the cluster.

The node_type column displays one of the following three values:

  • MGM: Management node.

  • NDB: Data node.

  • API: API or SQL node.

For an executable shipped with the NDB Cluster distribution, node_version shows the two-part MySQL NDB Cluster version string, such as 5.7.28-ndb-7.5.16 or 5.7.28-ndb-7.6.12, that it was compiled with. See Version strings used in NDB Cluster software, for more information.

process_id is the node executable's process ID as shown by the host operating system using a process display application such as top on Linux, or the Task Manager on Windows platforms.

angel_process_id is the system process ID for the node's angel process, which ensures that a data node or SQL node is automatically restarted in case of failure. For management nodes and API nodes other than SQL nodes, the value of this column is NULL.

The process_name column shows the name of the running executable. For management nodes, this is ndb_mgmd. For data nodes, this is ndbd (single-threaded) or ndbmtd (multithreaded). For SQL nodes, this is mysqld. For other types of API nodes, it is the name of the executable program connected to the cluster; NDB API applications can set a custom value for this using Ndb_cluster_connection::set_name().

service_URI shows the service network address. For management nodes and data nodes, the scheme used is ndb://. For SQL nodes, this is mysql://. By default, API nodes other than SQL nodes use ndb:// for the scheme; NDB API applications can set this to a custom value using Ndb_cluster_connection::set_service_uri(). Regardless of the node type, the scheme is followed by the IP address used by the NDB transporter for the node in question. For management nodes and SQL nodes, this address includes the port number (usually 1186 for management nodes and 3306 for SQL nodes). If the SQL node was started with the bind_address system variable set, this address is used instead of the transporter address, unless the bind address is set to *, 0.0.0.0, or ::.

Additional path information may be included in the service_URI value for an SQL node reflecting various configuration options. For example, mysql://198.51.100.3/tmp/mysql.sock indicates that the SQL node was started with --skip-networking, and mysql://198.51.100.3:3306/?server-id=1 shows that replication is enabled for this SQL node.

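To see this information at a glance for all connected nodes, a simple query such as the following can be used:

```sql
-- One row per connected node process, with type, executable, and URI.
SELECT node_id, node_type, process_name, process_id, service_URI
FROM ndbinfo.processes
ORDER BY node_id;
```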
The processes table was added in NDB 7.5.7 and NDB 7.6.2.

21.5.10.31 The ndbinfo resources Table

This table provides information about data node resource availability and usage.

These resources are sometimes known as super-pools.

The following table provides information about the columns in the resources table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.387 Columns of the resources table

Column Name Type Description
node_id integer The unique node ID of this data node.
resource_name string Name of the resource; see text.
reserved integer The amount reserved for this resource.
used integer The amount actually used by this resource.
max integer The maximum amount of this resource used, since the node was last started.

The resource_name can be one of the names shown in the following table:

Table 21.388 ndbinfo.resources table resource names and descriptions

Resource name Description
RESERVED Reserved by the system; cannot be overridden.
DISK_OPERATIONS If a log file group is allocated, the size of the undo log buffer is used to set the size of this resource. This resource is used only to allocate the undo log buffer for an undo log file group; there can only be one such group. Overallocation occurs as needed by CREATE LOGFILE GROUP.
DISK_RECORDS Records allocated for Disk Data operations.
DATA_MEMORY Used for main memory tuples, indexes, and hash indexes. Sum of DataMemory and IndexMemory, plus 8 pages of 32 KB each if IndexMemory has been set. Cannot be overallocated.
JOBBUFFER Used for allocating job buffers by the NDB scheduler; cannot be overallocated. This is approximately 2 MB per thread plus a 1 MB buffer in both directions for all threads that can communicate. For large configurations, this can consume several GB.
FILE_BUFFERS Used by the redo log handler in the DBLQH kernel block; cannot be overallocated. Size is NoOfFragmentLogParts * RedoBuffer, plus 1 MB per log file part.
TRANSPORTER_BUFFERS Used for send buffers by ndbmtd; the sum of TotalSendBufferMemory and ExtraSendBufferMemory. This resource can be overallocated by up to 25 percent. TotalSendBufferMemory is calculated by summing the send buffer memory per node, the default value of which is 2 MB. Thus, in a system having four data nodes and eight API nodes, the data nodes have 12 * 2 MB send buffer memory. ExtraSendBufferMemory is used by ndbmtd and amounts to 2 MB extra memory per thread. Thus, with 4 LDM threads, 2 TC threads, 1 main thread, 1 replication thread, and 2 receive threads, ExtraSendBufferMemory is 10 * 2 MB. Overallocation of this resource can be performed by setting the SharedGlobalMemory data node configuration parameter.
DISK_PAGE_BUFFER Used for the disk page buffer; determined by the DiskPageBufferMemory configuration parameter. Cannot be overallocated.
QUERY_MEMORY Used by the DBSPJ kernel block.
SCHEMA_TRANS_MEMORY Minimum is 2 MB; can be overallocated to use any remaining available memory.

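Current usage relative to the reserved amount for each resource can be inspected with a query along these lines:

```sql
-- Per-node resource usage; `max` is quoted because it is also the name
-- of an SQL function.
SELECT node_id, resource_name, reserved, used, `max`
FROM ndbinfo.resources
ORDER BY node_id, resource_name;
```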
21.5.10.32 The ndbinfo restart_info Table

The restart_info table contains information about node restart operations. Each entry in the table corresponds to a node restart status report in real time from a data node with the given node ID. Only the most recent report for any given node is shown.

The following table provides information about the columns in the restart_info table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.389 Columns of the restart_info table

Column Name Type Description
node_id integer Node ID in the cluster
node_restart_status VARCHAR(256) Node status; see text for values. Each of these corresponds to a possible value of node_restart_status_int.
node_restart_status_int integer Node status code; see text for values.
secs_to_complete_node_failure integer Time in seconds to complete node failure handling
secs_to_allocate_node_id integer Time in seconds from node failure completion to allocation of node ID
secs_to_include_in_heartbeat_protocol integer Time in seconds from allocation of node ID to inclusion in heartbeat protocol
secs_until_wait_for_ndbcntr_master integer Time in seconds from being included in heartbeat protocol until waiting for NDBCNTR master began
secs_wait_for_ndbcntr_master integer Time in seconds spent waiting to be accepted by NDBCNTR master for starting
secs_to_get_start_permitted integer Time in seconds elapsed from receiving of permission for start from master until all nodes have accepted start of this node
secs_to_wait_for_lcp_for_copy_meta_data integer Time in seconds spent waiting for LCP completion before copying meta data
secs_to_copy_meta_data integer Time in seconds required to copy metadata from master to newly starting node
secs_to_include_node integer Time in seconds waited for GCP and inclusion of all nodes into protocols
secs_starting_node_to_request_local_recovery integer Time in seconds that the node just starting spent waiting to request local recovery
secs_for_local_recovery integer Time in seconds required for local recovery by node just starting
secs_restore_fragments integer Time in seconds required to restore fragments from LCP files
secs_undo_disk_data integer Time in seconds required to execute undo log on disk data part of records
secs_exec_redo_log integer Time in seconds required to execute redo log on all restored fragments
secs_index_rebuild integer Time in seconds required to rebuild indexes on restored fragments
secs_to_synchronize_starting_node integer Time in seconds required to synchronize starting node from live nodes
secs_wait_lcp_for_restart integer Time in seconds required for LCP start and completion before restart was completed
secs_wait_subscription_handover integer Time in seconds spent waiting for handover of replication subscriptions
total_restart_secs integer Total number of seconds from node failure until node is started again

Defined values for node_restart_status_int and corresponding status names and messages (node_restart_status) are shown in the following table:

Table 21.390 Status codes and messages used in the restart_info table

Value Status Name Message
0 ALLOCATED_NODE_ID Allocated node id
1 INCLUDED_IN_HB_PROTOCOL Included in heartbeat protocol
2 NDBCNTR_START_WAIT Wait for NDBCNTR master to permit us to start
3 NDBCNTR_STARTED NDBCNTR master permitted us to start
4 START_PERMITTED All nodes permitted us to start
5 WAIT_LCP_TO_COPY_DICT Wait for LCP completion to start copying metadata
6 COPY_DICT_TO_STARTING_NODE Copying metadata to starting node
7 INCLUDE_NODE_IN_LCP_AND_GCP Include node in LCP and GCP protocols
8 LOCAL_RECOVERY_STARTED Restore fragments ongoing
9 COPY_FRAGMENTS_STARTED Synchronizing starting node with live nodes
10 WAIT_LCP_FOR_RESTART Wait for LCP to ensure durability
11 WAIT_SUMA_HANDOVER Wait for handover of subscriptions
12 RESTART_COMPLETED Restart completed
13 NODE_FAILED Node failed, failure handling in progress
14 NODE_FAILURE_COMPLETED Node failure handling completed
15 NODE_GETTING_PERMIT All nodes permitted us to start
16 NODE_GETTING_INCLUDED Include node in LCP and GCP protocols
17 NODE_GETTING_SYNCHED Synchronizing starting node with live nodes
18 NODE_GETTING_LCP_WAITED [none]
19 NODE_ACTIVE Restart completed
20 NOT_DEFINED_IN_CLUSTER [none]
21 NODE_NOT_RESTARTED_YET Initial state

Status numbers 0 through 12 apply on master nodes only; the remainder of those shown in the table apply to all restarting data nodes. Status numbers 13 and 14 define node failure states; 20 and 21 occur when no information about the restart of a given node is available.

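Using the status codes above, a sketch of a query for nodes whose most recent report does not indicate a completed restart might look like this (codes 12 and 19 both correspond to Restart completed):

```sql
-- Nodes whose latest restart report is not "Restart completed".
SELECT node_id, node_restart_status, total_restart_secs
FROM ndbinfo.restart_info
WHERE node_restart_status_int NOT IN (12, 19)
ORDER BY node_id;
```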
See also Section 21.5.1, “Summary of NDB Cluster Start Phases”.

21.5.10.33 The ndbinfo server_locks Table

The server_locks table is similar in structure to the cluster_locks table, and provides a subset of the information found in the latter table, but which is specific to the SQL node (MySQL server) where it resides. (The cluster_locks table provides information about all locks in the cluster.) More precisely, server_locks contains information about locks requested by threads belonging to the current mysqld instance, and serves as a companion table to server_operations. This may be useful for correlating locking patterns with specific MySQL user sessions, queries, or use cases.

The following table provides information about the columns in the server_locks table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.391 Columns of the server_locks table

Column Name Type Description
mysql_connection_id integer MySQL connection ID
node_id integer ID of reporting node
block_instance integer ID of reporting LDM instance
tableid integer ID of table containing this row
fragmentid integer ID of fragment containing locked row
rowid integer ID of locked row
transid integer Transaction ID
mode string Lock request mode
state string Lock state
detail string Whether this is first holding lock in row lock queue
op string Operation type
duration_millis integer Milliseconds spent waiting or holding lock
lock_num integer ID of lock object
waiting_for integer Waiting for lock with this ID

The mysql_connection_id column shows the MySQL connection or thread ID as shown by SHOW PROCESSLIST.

block_instance refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the threadblocks table.

The tableid is assigned to the table by NDB; the same ID is used for this table in other ndbinfo tables, as well as in the output of ndb_show_tables.

The transaction ID shown in the transid column is the identifier generated by the NDB API for the transaction requesting or holding the current lock.

The mode column shows the lock mode, which is always one of S (shared lock) or X (exclusive lock). If a transaction has an exclusive lock on a given row, all other locks on that row have the same transaction ID.

The state column shows the lock state. Its value is always one of H (holding) or W (waiting). A waiting lock request waits for a lock held by a different transaction.

The detail column indicates whether this lock is the first holding lock in the affected row's lock queue, in which case it contains a * (asterisk character); otherwise, this column is empty. This information can be used to help identify the unique entries in a list of lock requests.

The op column shows the type of operation requesting the lock. This is always one of the values READ, INSERT, UPDATE, DELETE, SCAN, or REFRESH.

The duration_millis column shows the number of milliseconds for which this lock request has been waiting or holding the lock. This is reset to 0 when a lock is granted for a waiting request.

The lock ID (lock_num column) is unique to this node and block instance.

If the state column's value is W, this lock is waiting to be granted, and the waiting_for column shows the lock ID of the lock object this request is waiting for. Otherwise, waiting_for is empty. waiting_for can refer only to locks on the same row (as identified by node_id, block_instance, tableid, fragmentid, and rowid).

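As a sketch, waiting lock requests on this SQL node can be paired with the locks they wait for by self-joining the table on the row identity and the waiting_for/lock_num pair (note that a holder belonging to a different API or SQL node appears only in cluster_locks, not here):

```sql
-- Pair each waiting lock request (state = 'W') with the lock it waits
-- for on the same row of the same fragment.
SELECT w.mysql_connection_id AS waiter_connection,
       h.mysql_connection_id AS holder_connection,
       w.tableid, w.rowid, w.duration_millis
FROM ndbinfo.server_locks AS w
JOIN ndbinfo.server_locks AS h
  ON  h.node_id        = w.node_id
  AND h.block_instance = w.block_instance
  AND h.tableid        = w.tableid
  AND h.fragmentid     = w.fragmentid
  AND h.rowid          = w.rowid
  AND h.lock_num       = w.waiting_for
WHERE w.state = 'W';
```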
The server_locks table was added in NDB 7.5.3.

21.5.10.34 The ndbinfo server_operations Table

The server_operations table contains entries for all ongoing NDB operations that the current SQL node (MySQL Server) is currently involved in. It effectively is a subset of the cluster_operations table, in which operations for other SQL and API nodes are not shown.

The following table provides information about the columns in the server_operations table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.392 Columns of the server_operations table

Column Name Type Description
mysql_connection_id integer MySQL Server connection ID
node_id integer Node ID
block_instance integer Block instance
transid integer Transaction ID
operation_type string Operation type (see text for possible values)
state string Operation state (see text for possible values)
tableid integer Table ID
fragmentid integer Fragment ID
client_node_id integer Client node ID
client_block_ref integer Client block reference
tc_node_id integer Transaction coordinator node ID
tc_block_no integer Transaction coordinator block number
tc_block_instance integer Transaction coordinator block instance

The mysql_connection_id is the same as the connection or session ID shown in the output of SHOW PROCESSLIST. It is obtained from the INFORMATION_SCHEMA table NDB_TRANSID_MYSQL_CONNECTION_MAP.

block_instance refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the threadblocks table.

The transaction ID (transid) is a unique 64-bit number which can be obtained using the NDB API's getTransactionId() method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

The operation_type column can take any one of the values READ, READ-SH, READ-EX, INSERT, UPDATE, DELETE, WRITE, UNLOCK, REFRESH, SCAN, SCAN-SH, SCAN-EX, or <unknown>.

The state column can have any one of the values ABORT_QUEUED, ABORT_STOPPED, COMMITTED, COMMIT_QUEUED, COMMIT_STOPPED, COPY_CLOSE_STOPPED, COPY_FIRST_STOPPED, COPY_STOPPED, COPY_TUPKEY, IDLE, LOG_ABORT_QUEUED, LOG_COMMIT_QUEUED, LOG_COMMIT_QUEUED_WAIT_SIGNAL, LOG_COMMIT_WRITTEN, LOG_COMMIT_WRITTEN_WAIT_SIGNAL, LOG_QUEUED, PREPARED, PREPARED_RECEIVED_COMMIT, SCAN_CHECK_STOPPED, SCAN_CLOSE_STOPPED, SCAN_FIRST_STOPPED, SCAN_RELEASE_STOPPED, SCAN_STATE_USED, SCAN_STOPPED, SCAN_TUPKEY, STOPPED, TC_NOT_CONNECTED, WAIT_ACC, WAIT_ACC_ABORT, WAIT_AI_AFTER_ABORT, WAIT_ATTR, WAIT_SCAN_AI, WAIT_TUP, WAIT_TUPKEYINFO, WAIT_TUP_COMMIT, or WAIT_TUP_TO_ABORT. (If the MySQL Server is running with ndbinfo_show_hidden enabled, you can view this list of states by selecting from the ndb$dblqh_tcconnect_state table, which is normally hidden.)

You can obtain the name of an NDB table from its table ID by checking the output of ndb_show_tables.

The fragmentid is the same as the partition number seen in the output of ndb_desc --extra-partition-info (short form -p).

In client_node_id and client_block_ref, client refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The block_instance and tc_block_instance columns provide NDB kernel block instance numbers. You can use these to obtain information about specific threads from the threadblocks table.

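A summary of this SQL node's ongoing operations, grouped per connection, can be obtained with a query such as the following sketch:

```sql
-- Count ongoing NDB operations by connection, type, and state.
SELECT mysql_connection_id, operation_type, state, COUNT(*) AS operations
FROM ndbinfo.server_operations
GROUP BY mysql_connection_id, operation_type, state
ORDER BY mysql_connection_id;
```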
21.5.10.35 The ndbinfo server_transactions Table

The server_transactions table is a subset of the cluster_transactions table, but includes only those transactions in which the current SQL node (MySQL Server) is a participant, while including the relevant connection IDs.

The following table provides information about the columns in the server_transactions table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.393 Columns of the server_transactions table

Column Name Type Description
mysql_connection_id integer MySQL Server connection ID
node_id integer Transaction coordinator node ID
block_instance integer Transaction coordinator block instance
transid integer Transaction ID
state string Operation state (see text for possible values)
count_operations integer Number of stateful operations in the transaction
outstanding_operations integer Operations still being executed by local data management layer (LQH blocks)
inactive_seconds integer Time spent waiting for API
client_node_id integer Client node ID
client_block_ref integer Client block reference

The mysql_connection_id is the same as the connection or session ID shown in the output of SHOW PROCESSLIST. It is obtained from the INFORMATION_SCHEMA table NDB_TRANSID_MYSQL_CONNECTION_MAP.

block_instance refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the threadblocks table.

The transaction ID (transid) is a unique 64-bit number which can be obtained using the NDB API's getTransactionId() method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

The state column can have any one of the values CS_ABORTING, CS_COMMITTING, CS_COMMIT_SENT, CS_COMPLETE_SENT, CS_COMPLETING, CS_CONNECTED, CS_DISCONNECTED, CS_FAIL_ABORTED, CS_FAIL_ABORTING, CS_FAIL_COMMITTED, CS_FAIL_COMMITTING, CS_FAIL_COMPLETED, CS_FAIL_PREPARED, CS_PREPARE_TO_COMMIT, CS_RECEIVING, CS_REC_COMMITTING, CS_RESTART, CS_SEND_FIRE_TRIG_REQ, CS_STARTED, CS_START_COMMITTING, CS_START_SCAN, CS_WAIT_ABORT_CONF, CS_WAIT_COMMIT_CONF, CS_WAIT_COMPLETE_CONF, CS_WAIT_FIRE_TRIG_REQ. (If the MySQL Server is running with ndbinfo_show_hidden enabled, you can view this list of states by selecting from the ndb$dbtc_apiconnect_state table, which is normally hidden.)

In client_node_id and client_block_ref, client refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The block_instance column provides the DBTC kernel block instance number. You can use this to obtain information about specific threads from the threadblocks table.

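For example, a sketch of a query for transactions on this SQL node that have been idle, waiting on the API, for more than five seconds:

```sql
-- Long-idle transactions belonging to this mysqld instance.
SELECT mysql_connection_id, transid, state,
       count_operations, inactive_seconds
FROM ndbinfo.server_transactions
WHERE inactive_seconds > 5
ORDER BY inactive_seconds DESC;
```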
21.5.10.36 The ndbinfo table_distribution_status Table

The table_distribution_status table provides information about the progress of table distribution for NDB tables.

The following table provides information about the columns in table_distribution_status. For each column, the table shows the name, data type, and a brief description.

Table 21.394 Columns of the table_distribution_status table

Column Name Type Description
node_id integer Node id
table_id integer Table ID
tab_copy_status string Status of copying of table distribution data to disk; one of IDLE, SR_PHASE1_READ_PAGES, SR_PHASE2_READ_TABLE, SR_PHASE3_COPY_TABLE, REMOVE_NODE, LCP_READ_TABLE, COPY_TAB_REQ, COPY_NODE_STATE, ADD_TABLE_MASTER, ADD_TABLE_SLAVE, INVALIDATE_NODE_LCP, ALTER_TABLE, COPY_TO_SAVE, or GET_TABINFO
tab_update_status string Status of updating of table distribution data; one of IDLE, LOCAL_CHECKPOINT, LOCAL_CHECKPOINT_QUEUED, REMOVE_NODE, COPY_TAB_REQ, ADD_TABLE_MASTER, ADD_TABLE_SLAVE, INVALIDATE_NODE_LCP, or CALLBACK
tab_lcp_status string Status of table LCP; one of ACTIVE (waiting for local checkpoint to be performed), WRITING_TO_FILE (checkpoint performed but not yet written to disk), or COMPLETED (checkpoint performed and persisted to disk)
tab_status string Table internal status; one of ACTIVE (table exists), CREATING (table is being created), or DROPPING (table is being dropped)
tab_storage string Table recoverability; one of NORMAL (fully recoverable with redo logging and checkpointing), NOLOGGING (recoverable from node crash, empty following cluster crash), or TEMPORARY (not recoverable)
tab_partitions integer Number of partitions in table
tab_fragments integer Number of fragments in table; normally same as tab_partitions; for fully replicated tables equal to tab_partitions * [number of node groups]
current_scan_count integer Current number of active scans
scan_count_wait integer Current number of scans waiting to be performed before ALTER TABLE can complete.
is_reorg_ongoing integer Whether table is currently being reorganized (1 if true)
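
For example, to check whether any table is currently being reorganized, a query like this one can be used (a sketch only; requires a running cluster):

mysql> SELECT node_id, table_id, tab_status, tab_partitions, tab_fragments
    ->   FROM ndbinfo.table_distribution_status
    ->  WHERE is_reorg_ongoing = 1;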

The table_distribution_status table was added in NDB 7.5.4.

21.5.10.37 The ndbinfo table_fragments Table

The table_fragments table provides information about the fragmentation, partitioning, distribution, and (internal) replication of NDB tables.

The following table provides information about the columns in table_fragments. For each column, the table shows the name, data type, and a brief description.

Table 21.395 Columns of the table_fragments table

Column Name Type Description
node_id integer Node ID (DIH master)
table_id integer Table ID
partition_id integer Partition ID
fragment_id integer Fragment ID (same as partition ID unless table is fully replicated)
partition_order integer Order of fragment in partition
log_part_id integer Log part ID of fragment
no_of_replicas integer Number of replicas
current_primary integer Current primary node ID
preferred_primary integer Preferred primary node ID
current_first_backup integer Current first backup node ID
current_second_backup integer Current second backup node ID
current_third_backup integer Current third backup node ID
num_alive_replicas integer Current number of live replicas
num_dead_replicas integer Current number of dead replicas
num_lcp_replicas integer Number of replicas remaining to be checkpointed
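
For example, to find fragments whose current primary replica differs from the preferred primary (as can happen following a node restart), a query along these lines can be used (a sketch only; requires a running cluster):

mysql> SELECT table_id, fragment_id, current_primary, preferred_primary
    ->   FROM ndbinfo.table_fragments
    ->  WHERE current_primary <> preferred_primary;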

The table_fragments table was added in NDB 7.5.4.

21.5.10.38 The ndbinfo table_info Table

The table_info table provides information about logging, checkpointing, distribution, and storage options in effect for individual NDB tables.

The following table provides information about the columns in table_info. For each column, the table shows the name, data type, and a brief description.

Table 21.396 Columns of the table_info table

Column Name Type Description
table_id integer Table ID
logged_table integer Whether table is logged (1) or not (0)
row_contains_gci integer Whether table rows contain GCI (1 true, 0 false)
row_contains_checksum integer Whether table rows contain checksum (1 true, 0 false)
read_backup integer If backup replicas are read this is 1, otherwise 0
fully_replicated integer If table is fully replicated this is 1, otherwise 0
storage_type string Table storage type; one of MEMORY or DISK
hashmap_id integer Hashmap ID
partition_balance string Partition balance (fragment count type) used for table; one of FOR_RP_BY_NODE, FOR_RA_BY_NODE, FOR_RP_BY_LDM, or FOR_RA_BY_LDM
create_gci integer GCI in which table was created
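
For example, to see which tables use read from backup replicas or are fully replicated, a query such as this can be used (a sketch only; requires a running cluster):

mysql> SELECT table_id, read_backup, fully_replicated, partition_balance
    ->   FROM ndbinfo.table_info
    ->  WHERE read_backup = 1 OR fully_replicated = 1;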

The table_info table was added in NDB 7.5.4.

21.5.10.39 The ndbinfo table_replicas Table

The table_replicas table provides information about the copying, distribution, and checkpointing of NDB table fragments and fragment replicas.

The following table provides information about the columns in table_replicas. For each column, the table shows the name, data type, and a brief description.

Table 21.397 Columns of the table_replicas table

Column Name Type Description
node_id integer ID of the node from which data is fetched (DIH master)
table_id integer Table ID
fragment_id integer Fragment ID
initial_gci integer Initial GCI for table
replica_node_id integer ID of node where replica is stored
is_lcp_ongoing integer Is 1 if LCP is ongoing on this fragment, 0 otherwise
num_crashed_replicas integer Number of crashed replica instances
last_max_gci_started integer Highest GCI started in most recent LCP
last_max_gci_completed integer Highest GCI completed in most recent LCP
last_lcp_id integer ID of most recent LCP
prev_lcp_id integer ID of previous LCP
prev_max_gci_started integer Highest GCI started in previous LCP
prev_max_gci_completed integer Highest GCI completed in previous LCP
last_create_gci integer Last Create GCI of last crashed replica instance
last_replica_gci integer Last GCI of last crashed replica instance
is_replica_alive integer 1 if this replica is alive, 0 otherwise
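
For example, to list fragment replicas on which a local checkpoint is currently in progress, a query like the following can be used (a sketch only; requires a running cluster):

mysql> SELECT table_id, fragment_id, replica_node_id, last_lcp_id
    ->   FROM ndbinfo.table_replicas
    ->  WHERE is_lcp_ongoing = 1;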

The table_replicas table was added in NDB 7.5.4.

21.5.10.40 The ndbinfo tc_time_track_stats Table

The tc_time_track_stats table provides time-tracking information obtained from the DBTC block (TC) instances in the data nodes, through which API nodes access NDB. Each TC instance tracks latencies for a set of activities it undertakes on behalf of API nodes or other data nodes; these activities include transactions, transaction errors, key reads, key writes, unique index operations, failed key operations of any type, scans, failed scans, fragment scans, and failed fragment scans.

A set of counters is maintained for each activity, each counter covering a range of latencies less than or equal to an upper bound. At the conclusion of each activity, its latency is determined and the appropriate counter incremented. tc_time_track_stats presents this information as rows, with a row for each instance of the following:

  • Data node, using its ID

  • TC block instance

  • Other communicating data node or API node, using its ID

  • Upper bound value

Each row contains a value for each activity type. This is the number of times that this activity occurred with a latency within the range specified by the row (that is, where the latency does not exceed the upper bound).
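
For example, a latency histogram of successful transactions requested by a given API node (node ID 20 here is an example value) can be read like this (a sketch only; requires a running cluster):

mysql> SELECT node_id, block_instance, upper_bound, transactions
    ->   FROM ndbinfo.tc_time_track_stats
    ->  WHERE comm_node_id = 20
    ->  ORDER BY node_id, block_instance, upper_bound;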

The following table provides information about the columns in tc_time_track_stats. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.398 Columns of the tc_time_track_stats table

Column Name Type Description
node_id integer Requesting node ID
block_number integer TC block number
block_instance integer TC block instance number
comm_node_id integer Node ID of communicating API or data node
upper_bound integer Upper bound of interval (in microseconds)
scans integer Based on duration of successful scans from opening to closing, tracked against the API or data nodes requesting them.
scan_errors integer Based on duration of failed scans from opening to closing, tracked against the API or data nodes requesting them.
scan_fragments integer Based on duration of successful fragment scans from opening to closing, tracked against the data nodes executing them
scan_fragment_errors integer Based on duration of failed fragment scans from opening to closing, tracked against the data nodes executing them
transactions integer Based on duration of successful transactions from beginning until sending of commit ACK, tracked against the API or data nodes requesting them. Stateless transactions are not included.
transaction_errors integer Based on duration of failing transactions from start to point of failure, tracked against the API or data nodes requesting them.
read_key_ops integer Based on duration of successful primary key reads with locks. Tracked against both the API or data node requesting them and the data node executing them.
write_key_ops integer Based on duration of successful primary key writes, tracked against both the API or data node requesting them and the data node executing them.
index_key_ops integer Based on duration of successful unique index key operations, tracked against both the API or data node requesting them and the data node executing reads of base tables.
key_op_errors integer Based on duration of all unsuccessful key read or write operations, tracked against both the API or data node requesting them and the data node executing them.

The block_instance column provides the DBTC kernel block instance number. You can use this together with the block name to obtain information about specific threads from the threadblocks table.
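
For example, the threads running the TC block instances seen in this table can be found with a join such as this one (a sketch only; requires a running cluster):

mysql> SELECT DISTINCT t.node_id, t.block_instance, b.thr_no
    ->   FROM ndbinfo.tc_time_track_stats t
    ->   JOIN ndbinfo.threadblocks b
    ->     ON b.node_id = t.node_id AND b.block_instance = t.block_instance
    ->  WHERE b.block_name = 'DBTC';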

The tc_time_track_stats table was added in NDB 7.4.9 (Bug #78533, Bug #21889652).

21.5.10.41 The ndbinfo threadblocks Table

The threadblocks table associates data nodes, threads, and instances of NDB kernel blocks.

The following table provides information about the columns in the threadblocks table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.399 Columns of the threadblocks table

Column Name Type Description
node_id integer Node ID
thr_no integer Thread ID
block_name string Block name
block_instance integer Block instance number

The value of the block_name in this table is one of the values found in the block_name column when selecting from the ndbinfo.blocks table. Although the list of possible values is static for a given NDB Cluster release, the list may vary between releases.

The block_instance column provides the kernel block instance number.

21.5.10.42 The ndbinfo threads Table

The threads table provides information about threads running in the NDB kernel.

The following table provides information about the columns in the threads table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.400 Columns of the threads table

Column Name Type Description
node_id integer ID of the node where the thread is running
thr_no integer Thread ID (specific to this node)
thread_name string Thread name (type of thread)
thread_description string Thread (type) description

Sample output from a 2-node example cluster, including thread descriptions, is shown here:

mysql> SELECT * FROM threads;
+---------+--------+-------------+-----------------------------------------------------------------+
| node_id | thr_no | thread_name | thread_description                                              |
+---------+--------+-------------+-----------------------------------------------------------------+
|       5 |      0 | main        | main thread, schema and distribution handling                   |
|       5 |      1 | rep         | rep thread, asynch replication and proxy block handling         |
|       5 |      2 | ldm         | ldm thread, handling a set of data partitions                   |
|       5 |      3 | recv        | receive thread, performing receive and polling for new receives |
|       6 |      0 | main        | main thread, schema and distribution handling                   |
|       6 |      1 | rep         | rep thread, asynch replication and proxy block handling         |
|       6 |      2 | ldm         | ldm thread, handling a set of data partitions                   |
|       6 |      3 | recv        | receive thread, performing receive and polling for new receives |
+---------+--------+-------------+-----------------------------------------------------------------+
8 rows in set (0.01 sec)

This table was added in NDB 7.5.2.

21.5.10.43 The ndbinfo threadstat Table

The threadstat table provides a rough snapshot of statistics for threads running in the NDB kernel.

The following table provides information about the columns in the threadstat table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.401 Columns of the threadstat table

Column Name Type Description
node_id integer Node ID
thr_no integer Thread ID
thr_nm string Thread name
c_loop string Number of loops in main loop
c_exec string Number of signals executed
c_wait string Number of times waiting for additional input
c_l_sent_prioa integer Number of priority A signals sent to own node
c_l_sent_priob integer Number of priority B signals sent to own node
c_r_sent_prioa integer Number of priority A signals sent to remote node
c_r_sent_priob integer Number of priority B signals sent to remote node
os_tid integer OS thread ID
os_now integer OS time (ms)
os_ru_utime integer OS user CPU time (µs)
os_ru_stime integer OS system CPU time (µs)
os_ru_minflt integer OS page reclaims (soft page faults)
os_ru_majflt integer OS page faults (hard page faults)
os_ru_nvcsw integer OS voluntary context switches
os_ru_nivcsw integer OS involuntary context switches

os_time uses the system gettimeofday() call.

The values of the os_ru_utime, os_ru_stime, os_ru_minflt, os_ru_majflt, os_ru_nvcsw, and os_ru_nivcsw columns are obtained using the system getrusage() call, or the equivalent.
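
Because these counters are cumulative, a single snapshot shows totals since the thread started. For example, total CPU time consumed per thread can be read like this (a sketch only; requires a running cluster):

mysql> SELECT node_id, thr_no, thr_nm,
    ->        os_ru_utime + os_ru_stime AS cpu_time_us
    ->   FROM ndbinfo.threadstat
    ->  ORDER BY cpu_time_us DESC;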

Since this table contains counts taken at a given point in time, for best results it is necessary to query this table periodically and store the results in an intermediate table or tables. The MySQL Server's Event Scheduler can be employed to automate such monitoring. For more information, see Section 23.4, “Using the Event Scheduler”.
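
For example (the table and event names here are illustrative, and the event scheduler must be enabled), periodic samples might be collected like this:

mysql> CREATE TABLE test.threadstat_history
    ->   SELECT NOW() AS sampled_at, t.* FROM ndbinfo.threadstat t;
mysql> CREATE EVENT test.threadstat_sample
    ->   ON SCHEDULE EVERY 10 SECOND
    ->   DO INSERT INTO test.threadstat_history
    ->      SELECT NOW(), t.* FROM ndbinfo.threadstat t;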

21.5.10.44 The ndbinfo transporters Table

This table contains information about NDB transporters.

The following table provides information about the columns in the transporters table. For each column, the table shows the name, data type, and a brief description. Additional information can be found in the notes following the table.

Table 21.402 Columns of the transporters table

Column Name Type Description
node_id integer This data node's unique node ID in the cluster
remote_node_id integer The remote data node's node ID
status string Status of the connection
remote_address string Name or IP address of the remote host
bytes_sent integer Number of bytes sent using this connection
bytes_received integer Number of bytes received using this connection
connect_count integer Number of times connection established on this transporter
overloaded boolean (0 or 1) 1 if this transporter is currently overloaded, otherwise 0
overload_count integer Number of times this transporter has entered overload state since connecting
slowdown boolean (0 or 1) 1 if this transporter is in slowdown state, otherwise 0
slowdown_count integer Number of times this transporter has entered slowdown state since connecting

For each running data node in the cluster, the transporters table displays a row showing the status of each of that node's connections with all nodes in the cluster, including itself. This information is shown in the table's status column, which can have any one of the following values: CONNECTING, CONNECTED, DISCONNECTING, or DISCONNECTED.

Connections to API and management nodes which are configured but not currently connected to the cluster are shown with status DISCONNECTED. Rows where the node_id is that of a data node which is not currently connected are not shown in this table. (This is similar to the omission of disconnected nodes in the ndbinfo.nodes table.)

The remote_address is the host name or address for the node whose ID is shown in the remote_node_id column. The bytes_sent from this node and bytes_received by this node are the numbers, respectively, of bytes sent and received by the node using this connection since it was established. For nodes whose status is CONNECTING or DISCONNECTED, these columns always display 0.

Assume you have a 5-node cluster consisting of 2 data nodes, 2 SQL nodes, and 1 management node, as shown in the output of the SHOW command in the ndb_mgm client:

ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.100.10.1  (5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=2    @10.100.10.2  (5.7.28-ndb-7.5.16, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @10.100.10.10  (5.7.28-ndb-7.5.16)

[mysqld(API)]   2 node(s)
id=20   @10.100.10.20  (5.7.28-ndb-7.5.16)
id=21   @10.100.10.21  (5.7.28-ndb-7.5.16)

There are 10 rows in the transporters table—5 for the first data node, and 5 for the second—assuming that all data nodes are running, as shown here:

mysql> SELECT node_id, remote_node_id, status
    ->   FROM ndbinfo.transporters;
+---------+----------------+---------------+
| node_id | remote_node_id | status        |
+---------+----------------+---------------+
|       1 |              1 | DISCONNECTED  |
|       1 |              2 | CONNECTED     |
|       1 |             10 | CONNECTED     |
|       1 |             20 | CONNECTED     |
|       1 |             21 | CONNECTED     |
|       2 |              1 | CONNECTED     |
|       2 |              2 | DISCONNECTED  |
|       2 |             10 | CONNECTED     |
|       2 |             20 | CONNECTED     |
|       2 |             21 | CONNECTED     |
+---------+----------------+---------------+
10 rows in set (0.04 sec)

If you shut down one of the data nodes in this cluster using the command 2 STOP in the ndb_mgm client, then repeat the previous query (again using the mysql client), this table now shows only 5 rows—1 row for each connection from the remaining data node to another node, including both itself and the data node that is currently offline—and displays CONNECTING for the status of this node's connection to the data node that is currently offline, as shown here:

mysql> SELECT node_id, remote_node_id, status
    ->   FROM ndbinfo.transporters;
+---------+----------------+---------------+
| node_id | remote_node_id | status        |
+---------+----------------+---------------+
|       1 |              1 | DISCONNECTED  |
|       1 |              2 | CONNECTING    |
|       1 |             10 | CONNECTED     |
|       1 |             20 | CONNECTED     |
|       1 |             21 | CONNECTED     |
+---------+----------------+---------------+
5 rows in set (0.02 sec)

The connect_count, overloaded, overload_count, slowdown, and slowdown_count counters are reset on connection, and retain their values after the remote node disconnects. The bytes_sent and bytes_received counters are also reset on connection, and so retain their values following disconnection (until the next connection resets them).

The overload state referred to by the overloaded and overload_count columns occurs when this transporter's send buffer contains more than OverloadLimit bytes (default is 80% of SendBufferMemory, that is, 0.8 * 2097152 = 1677721 bytes). When a given transporter is in a state of overload, any new transaction that tries to use this transporter fails with Error 1218 (Send Buffers overloaded in NDB kernel). This affects both scans and primary key operations.

The slowdown state referenced by the slowdown and slowdown_count columns of this table occurs when the transporter's send buffer contains more than 60% of the overload limit (equal to 0.6 * 2097152 = 1258291 bytes by default). In this state, any new scan using this transporter has its batch size reduced to minimize the load on the transporter.
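
Transporters currently in either state, along with how often each has entered these states since connecting, can be checked with a query such as this (a sketch only; requires a running cluster):

mysql> SELECT node_id, remote_node_id, remote_address,
    ->        overload_count, slowdown_count
    ->   FROM ndbinfo.transporters
    ->  WHERE overloaded = 1 OR slowdown = 1;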

Common causes of send buffer slowdown or overloading include the following:

  • Data size, in particular the quantity of data stored in TEXT columns or BLOB columns (or both types of columns)

  • Having a data node (ndbd or ndbmtd) on the same host as an SQL node that is engaged in binary logging

  • Large number of rows per transaction or transaction batch

  • Configuration issues such as insufficient SendBufferMemory

  • Hardware issues such as insufficient RAM or poor network connectivity

See also Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”.

21.5.11 INFORMATION_SCHEMA Tables for NDB Cluster

Two INFORMATION_SCHEMA tables provide information that is of particular use when managing an NDB Cluster. The FILES table provides information about NDB Cluster Disk Data files. The ndb_transid_mysql_connection_map table provides a mapping between transactions, transaction coordinators, and API nodes.

Additional statistical and other data about NDB Cluster transactions, operations, threads, blocks, and other aspects of performance can be obtained from the tables in the ndbinfo database. For information about these tables, see Section 21.5.10, “ndbinfo: The NDB Cluster Information Database”.

21.5.12 NDB Cluster Security Issues

This section discusses security considerations to take into account when setting up and running NDB Cluster.

Topics covered in this section include the following:

  • NDB Cluster and network security issues

  • Configuration issues relating to running NDB Cluster securely

  • NDB Cluster and the MySQL privilege system

  • MySQL standard security procedures as applicable to NDB Cluster

21.5.12.1 NDB Cluster Security and Networking Issues

In this section, we discuss basic network security issues as they relate to NDB Cluster. It is extremely important to remember that NDB Cluster out of the box is not secure; you or your network administrator must take the proper steps to ensure that your cluster cannot be compromised over the network.

Cluster communication protocols are inherently insecure, and no encryption or similar security measures are used in communications between nodes in the cluster. Because network speed and latency have a direct impact on the cluster's efficiency, it is also not advisable to employ SSL or other encryption to network connections between nodes, as such schemes will effectively slow communications.

It is also true that no authentication is used for controlling API node access to an NDB Cluster. As with encryption, the overhead of imposing authentication requirements would have an adverse impact on Cluster performance.

In addition, there is no checking of the source IP address for either of the following when accessing the cluster:

  • SQL or API nodes using free slots created by empty [mysqld] or [api] sections in the config.ini file

    This means that, if there are any empty [mysqld] or [api] sections in the config.ini file, then any API nodes (including SQL nodes) that know the management server's host name (or IP address) and port can connect to the cluster and access its data without restriction. (See Section 21.5.12.2, “NDB Cluster and MySQL Privileges”, for more information about this and related issues.)

    Note

    You can exercise some control over SQL and API node access to the cluster by specifying a HostName parameter for all [mysqld] and [api] sections in the config.ini file. However, this also means that, should you wish to connect an API node to the cluster from a previously unused host, you need to add an [api] section containing its host name to the config.ini file.

    More information is available elsewhere in this chapter about the HostName parameter. Also see Section 21.3.1, “Quick Test Setup of NDB Cluster”, for configuration examples using HostName with API nodes.
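
    As an illustration only (the addresses are taken from the example cluster shown later in this section), config.ini entries restricting SQL and API node access by host might look like this; with no empty [mysqld] or [api] sections present, hosts not listed cannot connect as API nodes:

    [mysqld]
    HostName=10.100.10.20

    [mysqld]
    HostName=10.100.10.21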

  • Any ndb_mgm client

    This means that any cluster management client that is given the management server's host name (or IP address) and port (if not the standard port) can connect to the cluster and execute any management client command. This includes commands such as ALL STOP and SHUTDOWN.

For these reasons, it is necessary to protect the cluster on the network level. The safest network configuration for Cluster is one which isolates connections between Cluster nodes from any other network communications. This can be accomplished by any of the following methods:

  1. Keeping Cluster nodes on a network that is physically separate from any public networks. This option is the most dependable; however, it is the most expensive to implement.

    We show an example of an NDB Cluster setup using such a physically segregated network here:

    Figure 21.39 NDB Cluster with Hardware Firewall

    Content is described in the surrounding text.

    This setup has two networks, one private (solid box) for the Cluster management servers and data nodes, and one public (dotted box) where the SQL nodes reside. (We show the management and data nodes connected using a gigabit switch since this provides the best performance.) Both networks are protected from the outside by a hardware firewall, sometimes also known as a network-based firewall.

    此设置有两个网络,一个用于群集管理服务器和数据节点的专用(实心框),一个用于SQL节点所在的公用(虚线框)。(我们展示了使用千兆位交换机连接的管理和数据节点,因为这提供了最好的性能。)两个网络都由硬件防火墙保护,有时也称为基于网络的防火墙。

    This network setup is safest because no packets can reach the cluster's management or data nodes from outside the network—and none of the cluster's internal communications can reach the outside—without going through the SQL nodes, as long as the SQL nodes do not permit any packets to be forwarded. This means, of course, that all SQL nodes must be secured against hacking attempts.

    Important

    With regard to potential security vulnerabilities, an SQL node is no different from any other MySQL server. See Section 6.1.3, “Making MySQL Secure Against Attackers”, for a description of techniques you can use to secure MySQL servers.

  2. Using one or more software firewalls (also known as host-based firewalls) to control which packets pass through to the cluster from portions of the network that do not require access to it. In this type of setup, a software firewall must be installed on every host in the cluster which might otherwise be accessible from outside the local network.

    The host-based option is the least expensive to implement, but relies purely on software to provide protection and so is the most difficult to keep secure.

    This type of network setup for NDB Cluster is illustrated here:

    Figure 21.40 NDB Cluster with Software Firewalls

    Content is described in the surrounding text.

    Using this type of network setup means that there are two zones of NDB Cluster hosts. Each cluster host must be able to communicate with all of the other machines in the cluster, but only those hosting SQL nodes (dotted box) can be permitted to have any contact with the outside, while those in the zone containing the data nodes and management nodes (solid box) must be isolated from any machines that are not part of the cluster. Applications using the cluster and users of those applications must not be permitted to have direct access to the management and data node hosts.

    To accomplish this, you must set up software firewalls that limit the traffic to the type or types shown in the following table, according to the type of node that is running on each cluster host computer:

    Table 21.403 Node types in a host-based firewall cluster configuration

    Node Type: SQL or API node
    Permitted Traffic:
    • It originates from the IP address of a management or data node (using any TCP or UDP port).

    • It originates from within the network in which the cluster resides and is on the port that your application is using.

    Node Type: Data node or Management node
    Permitted Traffic:
    • It originates from the IP address of a management or data node (using any TCP or UDP port).

    • It originates from the IP address of an SQL or API node.


    Any traffic other than that shown in the table for a given node type should be denied.

    The specifics of configuring a firewall vary from firewall application to firewall application, and are beyond the scope of this Manual. iptables is a very common and reliable firewall application, which is often used with APF as a front end to make configuration easier. You can (and should) consult the documentation for the software firewall that you employ, should you choose to implement an NDB Cluster network setup of this type, or of a mixed type as discussed under the next item.
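
    As a concrete (and simplified) illustration of the rules in the preceding table, the following sketch prints, rather than applies, a possible iptables rule set for a cluster host. The network address and application port are placeholder assumptions, and a real deployment would restrict sources to the individual node IP addresses rather than a whole subnet; consult your firewall's documentation before using anything like this.

    ```shell
    #!/bin/sh
    # Hypothetical sketch: emit (do not apply) iptables rules matching the
    # per-node-type policy described in the text. All addresses and ports
    # here are placeholder assumptions, not values from this manual.
    print_rules() {
        node_type="$1"      # "sql" for SQL/API nodes, "data_mgm" otherwise
        cluster_net="$2"    # network holding the cluster hosts (assumed)
        app_port="$3"       # port used by your application (assumed)
        # Traffic originating from other cluster hosts is always allowed.
        echo "iptables -A INPUT -s $cluster_net -j ACCEPT"
        if [ "$node_type" = "sql" ]; then
            # SQL and API nodes additionally accept application traffic.
            echo "iptables -A INPUT -p tcp --dport $app_port -j ACCEPT"
        fi
        # Any other traffic is denied, as the table requires.
        echo "iptables -A INPUT -j DROP"
    }

    print_rules sql 192.168.0.0/24 3306
    ```

    Running the generated commands (as root) is left as a deliberate manual step, so that the rules can be reviewed first.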

  3. It is also possible to employ a combination of the first two methods, using both hardware and software to secure the cluster—that is, using both network-based and host-based firewalls. This is between the first two schemes in terms of both security level and cost. This type of network setup keeps the cluster behind the hardware firewall, but permits incoming packets to travel beyond the router connecting all cluster hosts to reach the SQL nodes.

    One possible network deployment of an NDB Cluster using hardware and software firewalls in combination is shown here:

    Figure 21.41 NDB Cluster with a Combination of Hardware and Software Firewalls

    Content is described in the surrounding text.

    In this case, you can set the rules in the hardware firewall to deny any external traffic except to SQL nodes and API nodes, and then permit traffic to them only on the ports required by your application.

Whatever network configuration you use, remember that your objective from the viewpoint of keeping the cluster secure remains the same—to prevent any unessential traffic from reaching the cluster while ensuring the most efficient communication between the nodes in the cluster.

Because NDB Cluster requires large numbers of ports to be open for communications between nodes, the recommended option is to use a segregated network. This represents the simplest way to prevent unwanted traffic from reaching the cluster.

Note

If you wish to administer an NDB Cluster remotely (that is, from outside the local network), the recommended way to do this is to use ssh or another secure login shell to access an SQL node host. From this host, you can then run the management client to access the management server safely, from within the Cluster's own local network.

Even though it is possible to do so in theory, it is not recommended to use ndb_mgm to manage a Cluster directly from outside the local network on which the Cluster is running. Since neither authentication nor encryption takes place between the management client and the management server, this represents an extremely insecure means of managing the cluster, and is almost certain to be compromised sooner or later.

21.5.12.2 NDB Cluster and MySQL Privileges

In this section, we discuss how the MySQL privilege system works in relation to NDB Cluster and the implications of this for keeping an NDB Cluster secure.

Standard MySQL privileges apply to NDB Cluster tables. This includes all MySQL privilege types (SELECT privilege, UPDATE privilege, DELETE privilege, and so on) granted on the database, table, and column level. As with any other MySQL Server, user and privilege information is stored in the mysql system database. The SQL statements used to grant and revoke privileges on NDB tables, databases containing such tables, and columns within such tables are identical in all respects with the GRANT and REVOKE statements used in connection with database objects involving any (other) MySQL storage engine. The same thing is true with respect to the CREATE USER and DROP USER statements.

It is important to keep in mind that, by default, the MySQL grant tables use the MyISAM storage engine. Because of this, those tables are not normally duplicated or shared among MySQL servers acting as SQL nodes in an NDB Cluster. In other words, changes in users and their privileges do not automatically propagate between SQL nodes by default. If you wish, you can enable automatic distribution of MySQL users and privileges across NDB Cluster SQL nodes; see Section 21.5.16, “Distributed Privileges Using Shared Grant Tables”, for details.

Conversely, because there is no way in MySQL to deny privileges (privileges can either be revoked or not granted in the first place, but not denied as such), there is no special protection for NDB tables on one SQL node from users that have privileges on another SQL node. (This is true even if you are not using automatic distribution of user privileges.) The definitive example of this is the MySQL root account, which can perform any action on any database object. In combination with empty [mysqld] or [api] sections of the config.ini file, this account can be especially dangerous. To understand why, consider the following scenario:

  • The config.ini file contains at least one empty [mysqld] or [api] section. This means that the NDB Cluster management server performs no checking of the host from which a MySQL Server (or other API node) accesses the NDB Cluster.

  • There is no firewall, or the firewall fails to protect against access to the NDB Cluster from hosts external to the network.

  • The host name or IP address of the NDB Cluster management server is known or can be determined from outside the network.

If these conditions are true, then anyone, anywhere can start a MySQL Server with --ndbcluster --ndb-connectstring=management_host and access this NDB Cluster. Using the MySQL root account, this person can then perform the following actions:

  • Execute metadata statements such as SHOW DATABASES (to obtain a list of all NDB databases on the server) or SHOW TABLES FROM some_ndb_database (to obtain a list of all NDB tables in a given database)

  • Run any legal MySQL statements on any of the discovered tables, such as:

    • SELECT * FROM some_table to read all the data from any table

    • DELETE FROM some_table to delete all the data from a table

    • DESCRIBE some_table or SHOW CREATE TABLE some_table to determine the table schema

    • UPDATE some_table SET column1 = some_value to fill a table column with garbage data; this could actually cause much greater damage than simply deleting all the data

      More insidious variations might include statements like these:

      UPDATE some_table SET an_int_column = an_int_column + 1
      

      or

      UPDATE some_table SET a_varchar_column = REVERSE(a_varchar_column)
      

      Such malicious statements are limited only by the imagination of the attacker.

    The only tables that would be safe from this sort of mayhem would be those tables that were created using storage engines other than NDB, and so not visible to a rogue SQL node.

    A user who can log in as root can also access the INFORMATION_SCHEMA database and its tables, and so obtain information about databases, tables, stored routines, scheduled events, and any other database objects for which metadata is stored in INFORMATION_SCHEMA.

    It is also a very good idea to use different passwords for the root accounts on different NDB Cluster SQL nodes unless you are using distributed privileges.

In sum, you cannot have a safe NDB Cluster if it is directly accessible from outside your local network.

Important

Never leave the MySQL root account password empty. This is just as true when running MySQL as an NDB Cluster SQL node as it is when running it as a standalone (non-Cluster) MySQL Server, and should be done as part of the MySQL installation process before configuring the MySQL Server as an SQL node in an NDB Cluster.

If you wish to employ NDB Cluster's distributed privilege capabilities, you should not simply convert the system tables in the mysql database to use the NDB storage engine manually. Use the stored procedure provided for this purpose instead; see Section 21.5.16, “Distributed Privileges Using Shared Grant Tables”.

Otherwise, if you need to synchronize mysql system tables between SQL nodes, you can use standard MySQL replication to do so, or employ a script to copy table entries between the MySQL servers.

Summary.  The most important points to remember regarding the MySQL privilege system as it applies to NDB Cluster are listed here:

  1. Users and privileges established on one SQL node do not automatically exist or take effect on other SQL nodes in the cluster. Conversely, removing a user or privilege on one SQL node in the cluster does not remove the user or privilege from any other SQL nodes.

  2. You can distribute MySQL users and privileges among SQL nodes using the SQL script, and the stored procedures it contains, that are supplied for this purpose in the NDB Cluster distribution.

  3. Once a MySQL user is granted privileges on an NDB table from one SQL node in an NDB Cluster, that user can see any data in that table regardless of the SQL node from which the data originated, even if you are not using privilege distribution.

21.5.12.3 NDB Cluster and MySQL Security Procedures

In this section, we discuss MySQL standard security procedures as they apply to running NDB Cluster.

In general, any standard procedure for running MySQL securely also applies to running a MySQL Server as part of an NDB Cluster. First and foremost, you should always run a MySQL Server as the mysql operating system user; this is no different from running MySQL in a standard (non-Cluster) environment. The mysql system account should be uniquely and clearly defined. Fortunately, this is the default behavior for a new MySQL installation. You can verify that the mysqld process is running as the mysql operating system user by using a system command such as the one shown here:

shell> ps aux | grep mysql
root     10467  0.0  0.1   3616  1380 pts/3    S    11:53   0:00 \
  /bin/sh ./mysqld_safe --ndbcluster --ndb-connectstring=localhost:1186
mysql    10512  0.2  2.5  58528 26636 pts/3    Sl   11:53   0:00 \
  /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql \
  --datadir=/usr/local/mysql/var --user=mysql --ndbcluster \
  --ndb-connectstring=localhost:1186 --pid-file=/usr/local/mysql/var/mothra.pid \
  --log-error=/usr/local/mysql/var/mothra.err
jon      10579  0.0  0.0   2736   688 pts/0    S+   11:54   0:00 grep mysql

If the mysqld process is running as any other user than mysql, you should immediately shut it down and restart it as the mysql user. If this user does not exist on the system, the mysql user account should be created, and this user should be part of the mysql user group; in this case, you should also make sure that the MySQL data directory on this system (as set using the --datadir option for mysqld) is owned by the mysql user, and that the SQL node's my.cnf file includes user=mysql in the [mysqld] section. Alternatively, you can start the MySQL server process with --user=mysql on the command line, but it is preferable to use the my.cnf option, since you might forget to use the command-line option and so have mysqld running as another user unintentionally. The mysqld_safe startup script forces MySQL to run as the mysql user.
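
One way to script the check described above is to look up the owning operating system user of a process by its PID. This is a minimal sketch; in practice the mysqld PID would be read from the server's pid-file, whose path (as in the ps output above) varies by installation.

```shell
#!/bin/sh
# Minimal sketch: report which operating system user owns a given PID.
# For a real check you would pass the PID from mysqld's pid-file.
owner_of_pid() {
    # ps prints the effective user of the process; strip padding spaces.
    ps -o user= -p "$1" | tr -d ' '
}

# Example: inspect the current shell's own PID.
owner_of_pid $$
```

If the reported user is anything other than mysql for the mysqld process, shut the server down and restart it as described above.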

Important

Never run mysqld as the system root user. Doing so means that potentially any file on the system can be read by MySQL, and thus—should MySQL be compromised—by an attacker.

As mentioned in the previous section (see Section 21.5.12.2, “NDB Cluster and MySQL Privileges”), you should always set a root password for the MySQL Server as soon as you have it running. You should also delete the anonymous user account that is installed by default. You can accomplish these tasks using the following statements:

shell> mysql -u root

mysql> UPDATE mysql.user
    ->     SET Password=PASSWORD('secure_password')
    ->     WHERE User='root';

mysql> DELETE FROM mysql.user
    ->     WHERE User='';

mysql> FLUSH PRIVILEGES;

Be very careful when executing the DELETE statement not to omit the WHERE clause, or you risk deleting all MySQL users. Be sure to run the FLUSH PRIVILEGES statement as soon as you have modified the mysql.user table, so that the changes take immediate effect. Without FLUSH PRIVILEGES, the changes do not take effect until the next time that the server is restarted.
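
If you prefer to script this setup, the statements above can be generated and then piped to the mysql client. The helper below only prints the SQL (the password argument is a placeholder), so nothing runs against a server until you pipe the output in yourself; note that every statement modifying mysql.user carries its WHERE clause, per the warning above.

```shell
#!/bin/sh
# Hypothetical helper: print the secure-setup SQL shown above.
# The password is taken as an argument; this script never executes SQL.
secure_setup_sql() {
    pass="$1"
    cat <<EOF
UPDATE mysql.user SET Password=PASSWORD('$pass') WHERE User='root';
DELETE FROM mysql.user WHERE User='';
FLUSH PRIVILEGES;
EOF
}

# e.g.: secure_setup_sql 'secure_password' | mysql -u root
secure_setup_sql 'secure_password'
```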

Note

Many of the NDB Cluster utilities such as ndb_show_tables, ndb_desc, and ndb_select_all also work without authentication and can reveal table names, schemas, and data. By default these are installed on Unix-style systems with the permissions rwxr-xr-x (755), which means they can be executed by any user that can access the mysql/bin directory.
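
One possible mitigation, sketched here on a temporary file rather than on the real utilities in mysql/bin, is to tighten such permissions so that only the owning user and group may execute the binaries:

```shell
#!/bin/sh
# Sketch: restrict an executable from the default 755 to 750
# (owner and group only), and report the resulting mode.
tighten() {
    chmod 750 "$1"
    stat -c '%a' "$1"   # GNU stat; prints the octal mode
}

f=$(mktemp)
chmod 755 "$f"          # the default 755 discussed above
tighten "$f"            # prints 750
rm -f "$f"
```

Applied to the real utilities, this would require users of the NDB tools to be members of the owning group (typically mysql).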

See Section 21.4, “NDB Cluster Programs”, for more information about these utilities.

21.5.13 NDB Cluster Disk Data Tables

It is possible to store the nonindexed columns of NDB tables on disk, rather than in RAM.

As part of implementing NDB Cluster Disk Data work, a number of improvements were made in NDB Cluster for the efficient handling of very large amounts (terabytes) of data during node recovery and restart. These include a no-steal algorithm for synchronizing a starting node with very large data sets. For more information, see the paper Recovery Principles of NDB Cluster 5.1, by NDB Cluster developers Mikael Ronström and Jonas Oreland.

NDB Cluster Disk Data performance can be influenced by a number of configuration parameters. For information about these parameters and their effects, see NDB Cluster Disk Data configuration parameters, and NDB Cluster Disk Data storage and GCP Stop errors.

The performance of an NDB Cluster that uses Disk Data storage can also be greatly improved by separating data node file systems from undo log files and tablespace data files, which can be done using symbolic links. For more information, see Section 21.5.13.2, “Using Symbolic Links with Disk Data Objects”.

21.5.13.1 NDB Cluster Disk Data Objects

NDB Cluster Disk Data storage is implemented using a number of Disk Data objects. These include the following:

  • Tablespaces act as containers for other Disk Data objects.

  • Undo log files store information required for rolling back transactions.

  • One or more undo log files are assigned to a log file group, which is then assigned to a tablespace.

  • Data files store Disk Data table data. A data file is assigned directly to a tablespace.

Undo log files and data files are actual files in the file system of each data node; by default they are placed in a directory named ndb_node_id_fs under the DataDir specified in the NDB Cluster config.ini file, where node_id is the data node's node ID. It is possible to place these elsewhere by specifying either an absolute or relative path as part of the filename when creating the undo log or data file. Statements that create these files are shown later in this section.
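
The default placement rule just described can be expressed as a small helper; the DataDir value and node ID used in the example call are assumptions for illustration only.

```shell
#!/bin/sh
# Sketch: given a data node's DataDir, its node ID, and a bare filename,
# compute where an undo log or data file is created by default.
dd_file_path() {
    datadir="$1"; node_id="$2"; filename="$3"
    echo "$datadir/ndb_${node_id}_fs/$filename"
}

# Example (assumed DataDir and node ID):
dd_file_path /usr/local/mysql/data 3 undo_1.log
# prints /usr/local/mysql/data/ndb_3_fs/undo_1.log
```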

NDB Cluster tablespaces and log file groups are not implemented as files.

Important

Although not all Disk Data objects are implemented as files, they all share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and a log file group both named dd1.

Assuming that you have already set up an NDB Cluster with all nodes (including management and SQL nodes), the basic steps for creating an NDB Cluster table on disk are as follows:

  1. Create a log file group, and assign one or more undo log files to it (an undo log file is also sometimes referred to as an undofile).

    Note

    Undo log files are necessary only for Disk Data tables; they are not used for NDBCLUSTER tables that are stored only in memory.

  2. Create a tablespace; assign the log file group, as well as one or more data files, to the tablespace.

  3. Create a Disk Data table that uses this tablespace for data storage.

Each of these tasks can be accomplished using SQL statements in the mysql client or other MySQL client application, as shown in the example that follows.

  1. We create a log file group named lg_1 using CREATE LOGFILE GROUP. This log file group is to be made up of two undo log files, which we name undo_1.log and undo_2.log, whose initial sizes are 16 MB and 12 MB, respectively. (The default initial size for an undo log file is 128 MB.) Optionally, you can also specify a size for the log file group's undo buffer, or permit it to assume the default value of 8 MB. In this example, we set the UNDO buffer's size at 2 MB. A log file group must be created with an undo log file; so we add undo_1.log to lg_1 in this CREATE LOGFILE GROUP statement:

    CREATE LOGFILE GROUP lg_1
        ADD UNDOFILE 'undo_1.log'
        INITIAL_SIZE 16M
        UNDO_BUFFER_SIZE 2M
        ENGINE NDBCLUSTER;
    

    To add undo_2.log to the log file group, use the following ALTER LOGFILE GROUP statement:

    ALTER LOGFILE GROUP lg_1
        ADD UNDOFILE 'undo_2.log'
        INITIAL_SIZE 12M
        ENGINE NDBCLUSTER;
    

    Some items of note:

    • The .log file extension used here is not required. We use it merely to make the log files easily recognizable.

    • Every CREATE LOGFILE GROUP and ALTER LOGFILE GROUP statement must include an ENGINE option. The only permitted values for this option are NDBCLUSTER and NDB.

      Important

      There can exist at most one log file group in the same NDB Cluster at any given time.

    • When you add an undo log file to a log file group using ADD UNDOFILE 'filename', a file with the name filename is created in the ndb_node_id_fs directory within the DataDir of each data node in the cluster, where node_id is the node ID of the data node. Each undo log file is of the size specified in the SQL statement. For example, if an NDB Cluster has 4 data nodes, then the ALTER LOGFILE GROUP statement just shown creates 4 undo log files, 1 each on in the data directory of each of the 4 data nodes; each of these files is named undo_2.log and each file is 12 MB in size.

    • UNDO_BUFFER_SIZE is limited by the amount of system memory available.

    • For more information about the CREATE LOGFILE GROUP statement, see Section 13.1.15, “CREATE LOGFILE GROUP Syntax”. For more information about ALTER LOGFILE GROUP, see Section 13.1.5, “ALTER LOGFILE GROUP Syntax”.

  2. Now we can create a tablespace, which contains files to be used by NDB Cluster Disk Data tables for storing their data. A tablespace is also associated with a particular log file group. When creating a new tablespace, you must specify the log file group which it is to use for undo logging; you must also specify a data file. You can add more data files to the tablespace after the tablespace is created; it is also possible to drop data files from a tablespace (an example of dropping data files is provided later in this section).

    Assume that we wish to create a tablespace named ts_1 which uses lg_1 as its log file group. This tablespace is to contain two data files named data_1.dat and data_2.dat, whose initial sizes are 32 MB and 48 MB, respectively. (The default value for INITIAL_SIZE is 128 MB.) We can do this using two SQL statements, as shown here:

    CREATE TABLESPACE ts_1
        ADD DATAFILE 'data_1.dat'
        USE LOGFILE GROUP lg_1
        INITIAL_SIZE 32M
        ENGINE NDBCLUSTER;
    
    ALTER TABLESPACE ts_1
        ADD DATAFILE 'data_2.dat'
        INITIAL_SIZE 48M
        ENGINE NDBCLUSTER;
    

    The CREATE TABLESPACE statement creates a tablespace ts_1 with the data file data_1.dat, and associates ts_1 with log file group lg_1. The ALTER TABLESPACE adds the second data file (data_2.dat).

    Some items of note:

    • As is the case with the .log file extension used in this example for undo log files, there is no special significance for the .dat file extension; it is used merely for easy recognition of data files.

    • When you add a data file to a tablespace using ADD DATAFILE 'filename', a file with the name filename is created in the ndb_node_id_fs directory within the DataDir of each data node in the cluster, where node_id is the node ID of the data node. Each data file is of the size specified in the SQL statement. For example, if an NDB Cluster has 4 data nodes, then the ALTER TABLESPACE statement just shown creates 4 data files, 1 each in the data directory of each of the 4 data nodes; each of these files is named data_2.dat and each file is 48 MB in size.

    • All CREATE TABLESPACE and ALTER TABLESPACE statements must contain an ENGINE clause; only tables using the same storage engine as the tablespace can be created in the tablespace. For NDB Cluster tablespaces, the only permitted values for this option are NDBCLUSTER and NDB.

    • For more information about the CREATE TABLESPACE and ALTER TABLESPACE statements, see Section 13.1.19, “CREATE TABLESPACE Syntax”, and Section 13.1.9, “ALTER TABLESPACE Syntax”.

  3. Now it is possible to create a table whose nonindexed columns are stored on disk in the tablespace ts_1:

    CREATE TABLE dt_1 (
        member_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        last_name VARCHAR(50) NOT NULL,
        first_name VARCHAR(50) NOT NULL,
        dob DATE NOT NULL,
        joined DATE NOT NULL,
        INDEX(last_name, first_name)
        )
        TABLESPACE ts_1 STORAGE DISK
        ENGINE NDBCLUSTER;
    

    The TABLESPACE ... STORAGE DISK option tells the NDBCLUSTER storage engine to use tablespace ts_1 for disk data storage.

    Once table dt_1 has been created as shown, you can perform INSERT, SELECT, UPDATE, and DELETE statements on it just as you would with any other MySQL table.

    It is also possible to specify whether an individual column is stored on disk or in memory by using a STORAGE clause as part of the column's definition in a CREATE TABLE or ALTER TABLE statement. STORAGE DISK causes the column to be stored on disk, and STORAGE MEMORY causes in-memory storage to be used. See Section 13.1.18, “CREATE TABLE Syntax”, for more information.
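
    Taken together, the three steps above can be scripted by assembling the statements and piping them to the mysql client. This sketch only prints the SQL; running it, and the connection options to use, are left to the reader.

    ```shell
    #!/bin/sh
    # Sketch: print the Disk Data setup SQL from the example above
    # (log file group, tablespace, and Disk Data table), suitable for
    # piping to the mysql client.
    disk_data_setup_sql() {
        cat <<'EOF'
    CREATE LOGFILE GROUP lg_1
        ADD UNDOFILE 'undo_1.log'
        INITIAL_SIZE 16M
        UNDO_BUFFER_SIZE 2M
        ENGINE NDBCLUSTER;
    CREATE TABLESPACE ts_1
        ADD DATAFILE 'data_1.dat'
        USE LOGFILE GROUP lg_1
        INITIAL_SIZE 32M
        ENGINE NDBCLUSTER;
    CREATE TABLE dt_1 (
        member_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        last_name VARCHAR(50) NOT NULL,
        first_name VARCHAR(50) NOT NULL,
        dob DATE NOT NULL,
        joined DATE NOT NULL,
        INDEX(last_name, first_name)
    ) TABLESPACE ts_1 STORAGE DISK ENGINE NDBCLUSTER;
    EOF
    }

    # e.g.: disk_data_setup_sql | mysql -u root -p
    disk_data_setup_sql
    ```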

Indexing of columns implicitly stored on disk.  For table dt_1 as defined in the example just shown, only the dob and joined columns are stored on disk. This is because there are indexes on the member_id, last_name, and first_name columns, and so data belonging to these columns is stored in RAM. Only nonindexed columns can be held on disk; indexes and indexed column data continue to be stored in memory. This tradeoff between the use of indexes and conservation of RAM is something you must keep in mind as you design Disk Data tables.

You cannot add an index to a column that has been explicitly declared STORAGE DISK without first changing its storage type to MEMORY; any attempt to do so fails with an error. A column which implicitly uses disk storage can be indexed; when this is done, the column's storage type is changed to MEMORY automatically. By implicitly, we mean a column whose storage type is not declared, but which is inherited from the parent table. In the following CREATE TABLE statement (using the tablespace ts_1 defined previously), columns c2 and c3 use disk storage implicitly:

mysql> CREATE TABLE ti (
    ->     c1 INT PRIMARY KEY,
    ->     c2 INT,
    ->     c3 INT,
    ->     c4 INT
    -> )
    ->     STORAGE DISK
    ->     TABLESPACE ts_1
    ->     ENGINE NDBCLUSTER;
Query OK, 0 rows affected (1.31 sec)

Because c2, c3, and c4 are themselves not declared with STORAGE DISK, it is possible to index them. Here, we add indexes to c2 and c3, using, respectively, CREATE INDEX and ALTER TABLE:

mysql> CREATE INDEX i1 ON ti(c2);
Query OK, 0 rows affected (2.72 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE ti ADD INDEX i2(c3);
Query OK, 0 rows affected (0.92 sec)
Records: 0  Duplicates: 0  Warnings: 0

SHOW CREATE TABLE confirms that the indexes were added.

mysql> SHOW CREATE TABLE ti\G
*************************** 1. row ***************************
       Table: ti
Create Table: CREATE TABLE `ti` (
  `c1` int(11) NOT NULL,
  `c2` int(11) DEFAULT NULL,
  `c3` int(11) DEFAULT NULL,
  `c4` int(11) DEFAULT NULL,
  PRIMARY KEY (`c1`),
  KEY `i1` (`c2`),
  KEY `i2` (`c3`)
) /*!50100 TABLESPACE `ts_1` STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

You can see using ndb_desc that the indexed columns (those shown with ST=MEMORY in the output) now use in-memory rather than on-disk storage:

shell> ./ndb_desc -d test ti
-- ti --
Version: 33554433
Fragment type: HashMapPartition
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 4
Number of primary keys: 1
Length of frm data: 317
Max Rows: 0
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
PartitionCount: 4
FragmentCount: 4
PartitionBalance: FOR_RP_BY_LDM
ExtraRowGciBits: 0
ExtraRowAuthorBits: 0
TableStatus: Retrieved
Table options:
HashMap: DEFAULT-HASHMAP-3840-4
-- Attributes --
c1 Int PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY
c2 Int NULL AT=FIXED ST=MEMORY
c3 Int NULL AT=FIXED ST=MEMORY
c4 Int NULL AT=FIXED ST=DISK
-- Indexes --
PRIMARY KEY(c1) - UniqueHashIndex
i2(c3) - OrderedIndex
PRIMARY(c1) - OrderedIndex
i1(c2) - OrderedIndex

NDBT_ProgramExit: 0 - OK

Performance note.  The performance of a cluster using Disk Data storage is greatly improved if Disk Data files are kept on a separate physical disk from the data node file system. This must be done for each data node in the cluster to derive any noticeable benefit.

You may use absolute and relative file system paths with ADD UNDOFILE and ADD DATAFILE. Relative paths are calculated relative to the data node's data directory. You may also use symbolic links; see Section 21.5.13.2, “Using Symbolic Links with Disk Data Objects”, for more information and examples.

A log file group, a tablespace, and any Disk Data tables using these must be created in a particular order. The same is true for dropping any of these objects:

  • A log file group cannot be dropped as long as any tablespaces are using it.

  • A tablespace cannot be dropped as long as it contains any data files.

  • You cannot drop any data files from a tablespace as long as there remain any tables which are using the tablespace.

  • It is not possible to drop a data file using any tablespace other than the one with which the file was created. (Bug #20053)

For example, to drop all the objects created so far in this section, you would use the following statements:

mysql> DROP TABLE dt_1;

mysql> ALTER TABLESPACE ts_1
    -> DROP DATAFILE 'data_2.dat'
    -> ENGINE NDBCLUSTER;

mysql> ALTER TABLESPACE ts_1
    -> DROP DATAFILE 'data_1.dat'
    -> ENGINE NDBCLUSTER;

mysql> DROP TABLESPACE ts_1
    -> ENGINE NDBCLUSTER;

mysql> DROP LOGFILE GROUP lg_1
    -> ENGINE NDBCLUSTER;

These statements must be performed in the order shown, except that the two ALTER TABLESPACE ... DROP DATAFILE statements may be executed in either order.

You can obtain information about data files used by Disk Data tables by querying the FILES table in the INFORMATION_SCHEMA database. An extra NULL row provides additional information about undo log files. For more information and examples, see Section 24.9, “The INFORMATION_SCHEMA FILES Table”.
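
For example, a query along the following lines (the column list is abbreviated here) returns the name, type, and free and total extents of each Disk Data file:

mysql> SELECT FILE_NAME, FILE_TYPE, TABLESPACE_NAME,
    ->        FREE_EXTENTS, TOTAL_EXTENTS, EXTENT_SIZE
    ->     FROM INFORMATION_SCHEMA.FILES
    ->     WHERE ENGINE = 'ndbcluster';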

21.5.13.2 Using Symbolic Links with Disk Data Objects

The performance of an NDB Cluster that uses Disk Data storage can be greatly improved by separating data node file systems from undo log files and tablespace data files, and placing these on different disks. Early versions of NDB Cluster had no direct support for this, but it was possible to achieve the separation using symbolic links, as described in this section. NDB Cluster now supports the data node configuration parameters FileSystemPathDD, FileSystemPathDataFiles, and FileSystemPathUndoFiles, which make the use of symbolic links for this purpose unnecessary. For more information about these parameters, see Disk Data file system parameters.
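
For example, a config.ini fragment such as the following (the paths shown are illustrative) places undo log files and data files on separate disks without any use of symbolic links:

[ndbd default]
DataDir=/data0
FileSystemPathUndoFiles=/data1
FileSystemPathDataFiles=/data2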

Each data node in the cluster creates a file system in the directory named ndb_node_id_fs under the data node's DataDir as defined in the config.ini file. In this example, we assume that each data node host has 3 disks, aliased as /data0, /data1, and /data2, and that the cluster's config.ini includes the following:

[ndbd default]
DataDir= /data0

Our objective is to place all Disk Data log files in /data1, and all Disk Data data files in /data2, on each data node host.

Note

In this example, we assume that the cluster's data node hosts are all using Linux operating systems. For other platforms, you may need to substitute your operating system's commands for those shown here.

To accomplish this, perform the following steps:

  • Under the data node file system create symbolic links pointing to the other drives:

    shell> cd /data0/ndb_2_fs
    shell> ls
    D1  D10  D11  D2  D8  D9  LCP
    shell> ln -s /data1 dnlogs
    shell> ln -s /data2 dndata
    

    You should now have two symbolic links:

    shell> ls -l --hide=D*
    lrwxrwxrwx 1 user group   30 2007-03-19 13:58 dnlogs -> /data1
    lrwxrwxrwx 1 user group   30 2007-03-19 13:59 dndata -> /data2
    

    We show this only for the data node with node ID 2; however, you must do this for each data node.

  • Now, in the mysql client, create a log file group and tablespace using the symbolic links, as shown here:

    mysql> CREATE LOGFILE GROUP lg1
        ->    ADD UNDOFILE 'dnlogs/undo1.log'
        ->    INITIAL_SIZE 150M
        ->    UNDO_BUFFER_SIZE = 1M
        ->    ENGINE=NDBCLUSTER;
    
    mysql> CREATE TABLESPACE ts1
        ->    ADD DATAFILE 'dndata/data1.log'
        ->    USE LOGFILE GROUP lg1
        ->    INITIAL_SIZE 1G
        ->    ENGINE=NDBCLUSTER;
    

    Verify that the files were created and placed correctly as shown here:

    shell> cd /data1
    shell> ls -l
    total 2099304
    -rw-rw-r--  1 user group 157286400 2007-03-19 14:02 undo1.log
    
    shell> cd /data2
    shell> ls -l
    total 2099304
    -rw-rw-r--  1 user group 1073741824 2007-03-19 14:02 data1.log
    
  • If you are running multiple data nodes on one host, you must take care to avoid having them try to use the same space for Disk Data files. You can make this easier by creating a symbolic link in each data node file system. Suppose you are using /data0 for both data node file systems, but you wish to have the Disk Data files for both nodes on /data1. In this case, you can do something similar to what is shown here:

    shell> cd /data0
    shell> ln -s /data1/dn2 ndb_2_fs/dd
    shell> ln -s /data1/dn3 ndb_3_fs/dd
    shell> ls -l --hide=D* ndb_2_fs
    lrwxrwxrwx 1 user group   30 2007-03-19 14:22 dd -> /data1/dn2
    shell> ls -l --hide=D* ndb_3_fs
    lrwxrwxrwx 1 user group   30 2007-03-19 14:22 dd -> /data1/dn3
    
  • Now you can create a logfile group and tablespace using the symbolic link, like this:

    mysql> CREATE LOGFILE GROUP lg1
        ->    ADD UNDOFILE 'dd/undo1.log'
        ->    INITIAL_SIZE 150M
        ->    UNDO_BUFFER_SIZE = 1M
        ->    ENGINE=NDBCLUSTER;
    
    mysql> CREATE TABLESPACE ts1
        ->    ADD DATAFILE 'dd/data1.log'
        ->    USE LOGFILE GROUP lg1
        ->    INITIAL_SIZE 1G
        ->    ENGINE=NDBCLUSTER;
    

    Verify that the files were created and placed correctly as shown here:

    shell> cd /data1
    shell> ls
    dn2        dn3
    shell> ls dn2
    undo1.log        data1.log
    shell> ls dn3
    undo1.log        data1.log
    

21.5.13.3 NDB Cluster Disk Data Storage Requirements

The following items apply to Disk Data storage requirements:

  • Variable-length columns of Disk Data tables take up a fixed amount of space. For each row, this is equal to the space required to store the largest possible value for that column.

    For general information about calculating these values, see Section 11.8, “Data Type Storage Requirements”.

    You can obtain an estimate of the amount of space available in data files and undo log files by querying the INFORMATION_SCHEMA.FILES table. For more information and examples, see Section 24.9, “The INFORMATION_SCHEMA FILES Table”.

    Note

    The OPTIMIZE TABLE statement does not have any effect on Disk Data tables.

  • In a Disk Data table, the first 256 bytes of a TEXT or BLOB column are stored in memory; only the remainder is stored on disk.

  • Each row in a Disk Data table uses 8 bytes in memory to point to the data stored on disk. This means that, in some cases, converting an in-memory column to the disk-based format can actually result in greater memory usage. For example, converting a CHAR(4) column from memory-based to disk-based format increases the amount of DataMemory used per row from 4 to 8 bytes.

Important

Starting the cluster with the --initial option does not remove Disk Data files. You must remove these manually prior to performing an initial restart of the cluster.

Performance of Disk Data tables can be improved by minimizing the number of disk seeks; you can do this by making sure that DiskPageBufferMemory is of sufficient size. You can query the diskpagebuffer table to help determine whether the value for this parameter needs to be increased.
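
For example, a query along these lines (which assumes that page_requests_direct_return and page_requests_wait_io are not both zero) gives a rough per-node hit ratio for the disk page buffer:

mysql> SELECT node_id,
    ->        100 * page_requests_direct_return /
    ->        (page_requests_direct_return + page_requests_wait_io)
    ->            AS hit_ratio
    ->     FROM ndbinfo.diskpagebuffer;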

21.5.14 Online Operations with ALTER TABLE in NDB Cluster

MySQL NDB Cluster 7.5 supports online table schema changes using the standard ALTER TABLE syntax employed by the MySQL Server (ALGORITHM=DEFAULT|INPLACE|COPY), which is described elsewhere in this documentation.

Note

Some older releases of NDB Cluster used a syntax specific to NDB for online ALTER TABLE operations. That syntax has since been removed.

Operations that add and drop indexes on variable-width columns of NDB tables occur online. Online operations are noncopying; that is, they do not require that indexes be re-created. They do not lock the table being altered against access by other API nodes in an NDB Cluster (but see Limitations of NDB online operations, later in this section). Such operations do not require single user mode for NDB table alterations made in an NDB cluster with multiple API nodes; transactions can continue uninterrupted during online DDL operations.

ALGORITHM=INPLACE can be used to perform online ADD COLUMN, ADD INDEX (including CREATE INDEX statements), and DROP INDEX operations on NDB tables. Online renaming of NDB tables is also supported.
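
For example, given an existing NDB table t (the table, column, and index names here are illustrative), an index can be added and the table renamed online as shown here:

mysql> CREATE INDEX ix ON t (c2) ALGORITHM=INPLACE;

mysql> ALTER TABLE t RENAME TO t_new, ALGORITHM=INPLACE;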

Currently you cannot add disk-based columns to NDB tables online. This means that, if you wish to add an in-memory column to an NDB table that uses a table-level STORAGE DISK option, you must declare the new column as using memory-based storage explicitly. For example—assuming that you have already created tablespace ts1—suppose that you create table t1 as follows:

mysql> CREATE TABLE t1 (
     >     c1 INT NOT NULL PRIMARY KEY,
     >     c2 VARCHAR(30)
     >     )
     >     TABLESPACE ts1 STORAGE DISK
     >     ENGINE NDB;
Query OK, 0 rows affected (1.73 sec)
Records: 0  Duplicates: 0  Warnings: 0

You can add a new in-memory column to this table online as shown here:

mysql> ALTER TABLE t1
     >     ADD COLUMN c3 INT COLUMN_FORMAT DYNAMIC STORAGE MEMORY,
     >     ALGORITHM=INPLACE;
Query OK, 0 rows affected (1.25 sec)
Records: 0  Duplicates: 0  Warnings: 0

This statement fails if the STORAGE MEMORY option is omitted:

mysql> ALTER TABLE t1
     >     ADD COLUMN c4 INT COLUMN_FORMAT DYNAMIC,
     >     ALGORITHM=INPLACE;
ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason:
Adding column(s) or add/reorganize partition not supported online. Try
ALGORITHM=COPY.

If you omit the COLUMN_FORMAT DYNAMIC option, the dynamic column format is employed automatically, but a warning is issued, as shown here:

mysql> ALTER TABLE t1 ADD COLUMN c4 INT STORAGE MEMORY, ALGORITHM=INPLACE;
Query OK, 0 rows affected, 1 warning (1.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Warning
   Code: 1478
Message: DYNAMIC column c4 with STORAGE DISK is not supported, column will
become FIXED


mysql> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) NOT NULL,
  `c2` varchar(30) DEFAULT NULL,
  `c3` int(11) /*!50606 STORAGE MEMORY */ /*!50606 COLUMN_FORMAT DYNAMIC */ DEFAULT NULL,
  `c4` int(11) /*!50606 STORAGE MEMORY */ DEFAULT NULL,
  PRIMARY KEY (`c1`)
) /*!50606 TABLESPACE ts1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.03 sec)

Note

The STORAGE and COLUMN_FORMAT keywords are supported only in NDB Cluster; in any other version of MySQL, attempting to use either of these keywords in a CREATE TABLE or ALTER TABLE statement results in an error.

It is also possible to use the statement ALTER TABLE ... REORGANIZE PARTITION, ALGORITHM=INPLACE with no partition_names INTO (partition_definitions) option on NDB tables. This can be used to redistribute NDB Cluster data among new data nodes that have been added to the cluster online. This does not perform any defragmentation, which requires an OPTIMIZE TABLE or null ALTER TABLE statement. For more information, see Section 21.5.15, “Adding NDB Cluster Data Nodes Online”.
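
For example, after adding a node group, data in an existing table tbl (the table name is illustrative) can be redistributed and then defragmented as shown here:

mysql> ALTER TABLE tbl ALGORITHM=INPLACE, REORGANIZE PARTITION;

mysql> OPTIMIZE TABLE tbl;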

Limitations of NDB online operations

Online DROP COLUMN operations are not supported.

Online ALTER TABLE, CREATE INDEX, or DROP INDEX statements that add columns or add or drop indexes are subject to the following limitations:

  • A given online ALTER TABLE can use only one of ADD COLUMN, ADD INDEX, or DROP INDEX. One or more columns can be added online in a single statement; only one index may be created or dropped online in a single statement.

  • The table being altered is not locked with respect to API nodes other than the one on which an online ALTER TABLE ADD COLUMN, ADD INDEX, or DROP INDEX operation (or CREATE INDEX or DROP INDEX statement) is run. However, the table is locked against any other operations originating on the same API node while the online operation is being executed.

  • The table to be altered must have an explicit primary key; the hidden primary key created by the NDB storage engine is not sufficient for this purpose.

  • The storage engine used by the table cannot be changed online.

  • When used with NDB Cluster Disk Data tables, it is not possible to change the storage type (DISK or MEMORY) of a column online. This means, that when you add or drop an index in such a way that the operation would be performed online, and you want the storage type of the column or columns to be changed, you must use ALGORITHM=COPY in the statement that adds or drops the index.

Columns to be added online cannot use the BLOB or TEXT type, and must meet the following criteria:

  • The columns must be dynamic; that is, it must be possible to create them using COLUMN_FORMAT DYNAMIC. If you omit the COLUMN_FORMAT DYNAMIC option, the dynamic column format is employed automatically.

  • The columns must permit NULL values and not have any explicit default value other than NULL. Columns added online are automatically created as DEFAULT NULL, as can be seen here:

    mysql> CREATE TABLE t2 (
         >     c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY
         >     ) ENGINE=NDB;
    Query OK, 0 rows affected (1.44 sec)
    
    mysql> ALTER TABLE t2
         >     ADD COLUMN c2 INT,
         >     ADD COLUMN c3 INT,
         >     ALGORITHM=INPLACE;
    Query OK, 0 rows affected, 2 warnings (0.93 sec)
    
    mysql> SHOW CREATE TABLE t2\G
    *************************** 1. row ***************************
           Table: t2
    Create Table: CREATE TABLE `t2` (
      `c1` int(11) NOT NULL AUTO_INCREMENT,
      `c2` int(11) DEFAULT NULL,
      `c3` int(11) DEFAULT NULL,
      PRIMARY KEY (`c1`)
    ) ENGINE=ndbcluster DEFAULT CHARSET=latin1
    1 row in set (0.00 sec)
    
  • The columns must be added following any existing columns. If you attempt to add a column online before any existing columns or using the FIRST keyword, the statement fails with an error.

  • Existing table columns cannot be reordered online.

For online ALTER TABLE operations on NDB tables, fixed-format columns are converted to dynamic when they are added online, or when indexes are created or dropped online, as shown here (repeating the CREATE TABLE and ALTER TABLE statements just shown for the sake of clarity):

mysql> CREATE TABLE t2 (
     >     c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY
     >     ) ENGINE=NDB;
Query OK, 0 rows affected (1.44 sec)

mysql> ALTER TABLE t2
     >     ADD COLUMN c2 INT,
     >     ADD COLUMN c3 INT,
     >     ALGORITHM=INPLACE;
Query OK, 0 rows affected, 2 warnings (0.93 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Warning
   Code: 1478
Message: Converted FIXED field 'c2' to DYNAMIC to enable online ADD COLUMN
*************************** 2. row ***************************
  Level: Warning
   Code: 1478
Message: Converted FIXED field 'c3' to DYNAMIC to enable online ADD COLUMN
2 rows in set (0.00 sec)

Only the column or columns to be added online must be dynamic. Existing columns need not be; this includes the table's primary key, which may also be FIXED, as shown here:

mysql> CREATE TABLE t3 (
     >     c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY COLUMN_FORMAT FIXED
     >     ) ENGINE=NDB;
Query OK, 0 rows affected (2.10 sec)

mysql> ALTER TABLE t3 ADD COLUMN c2 INT, ALGORITHM=INPLACE;
Query OK, 0 rows affected, 1 warning (0.78 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Warning
   Code: 1478
Message: Converted FIXED field 'c2' to DYNAMIC to enable online ADD COLUMN
1 row in set (0.00 sec)

Columns are not converted from FIXED to DYNAMIC column format by renaming operations. For more information about COLUMN_FORMAT, see Section 13.1.18, “CREATE TABLE Syntax”.

The KEY, CONSTRAINT, and IGNORE keywords are supported in ALTER TABLE statements using ALGORITHM=INPLACE.

Beginning with NDB Cluster 7.5.7 and 7.6.3, setting MAX_ROWS to 0 using an online ALTER TABLE statement is disallowed. You must use a copying ALTER TABLE to perform this operation. (Bug #21960004)

21.5.15 Adding NDB Cluster Data Nodes Online

This section describes how to add NDB Cluster data nodes online—that is, without needing to shut down the cluster completely and restart it as part of the process.

Important

Currently, you must add new data nodes to an NDB Cluster as part of a new node group. In addition, it is not possible to change the number of replicas (or the number of nodes per node group) online.

21.5.15.1 Adding NDB Cluster Data Nodes Online: General Issues

This section provides general information about the behavior of and current limitations in adding NDB Cluster nodes online.

Redistribution of Data.  The ability to add new nodes online includes a means to reorganize NDBCLUSTER table data and indexes so that they are distributed across all data nodes, including the new ones, by means of the ALTER TABLE ... REORGANIZE PARTITION statement. Table reorganization of both in-memory and Disk Data tables is supported. This redistribution does not currently include unique indexes (only ordered indexes are redistributed).

The redistribution for NDBCLUSTER tables already existing before the new data nodes were added is not automatic, but can be accomplished using simple SQL statements in mysql or another MySQL client application. However, all data and indexes added to tables created after a new node group has been added are distributed automatically among all cluster data nodes, including those added as part of the new node group.

Partial starts.  It is possible to add a new node group without all of the new data nodes being started. It is also possible to add a new node group to a degraded cluster—that is, a cluster that is only partially started, or where one or more data nodes are not running. In the latter case, the cluster must have enough nodes running to be viable before the new node group can be added.

Effects on ongoing operations.  Normal DML operations using NDB Cluster data are not prevented by the creation or addition of a new node group, or by table reorganization. However, it is not possible to perform DDL concurrently with table reorganization—that is, no other DDL statements can be issued while an ALTER TABLE ... REORGANIZE PARTITION statement is executing. In addition, during the execution of ALTER TABLE ... REORGANIZE PARTITION (or the execution of any other DDL statement), it is not possible to restart cluster data nodes.

Failure handling.  Failures of data nodes during node group creation and table reorganization are handled as shown in the following table:

Table 21.404 Data node failure handling during node group creation and table reorganization

Failure during node group creation (the behavior is the same whether the failing data node is an old node or a new one):

  • If a node other than the master fails:  The creation of the node group is always rolled forward.

  • If the master fails: 

    • If the internal commit point has been reached:  The creation of the node group is rolled forward.

    • If the internal commit point has not yet been reached:  The creation of the node group is rolled back.

System failure during node group creation:

  • If the execution of CREATE NODEGROUP has reached the internal commit point:  When restarted, the cluster includes the new node group; otherwise, it does not.

  • If the execution of CREATE NODEGROUP has not yet reached the internal commit point:  When restarted, the cluster does not include the new node group.

Failure during table reorganization (the behavior is the same whether the failing data node is an old node or a new one):

  • If a node other than the master fails:  The table reorganization is always rolled forward.

  • If the master fails: 

    • If the internal commit point has been reached:  The table reorganization is rolled forward.

    • If the internal commit point has not yet been reached:  The table reorganization is rolled back.

System failure during table reorganization:

  • If the execution of an ALTER TABLE ... REORGANIZE PARTITION statement has reached the internal commit point:  When the cluster is restarted, the data and indexes belonging to the table are distributed using the new data nodes.

  • If the execution of an ALTER TABLE ... REORGANIZE PARTITION statement has not yet reached the internal commit point:  When the cluster is restarted, the data and indexes belonging to the table are distributed using only the old data nodes.


Dropping node groups.  The ndb_mgm client supports a DROP NODEGROUP command, but it is possible to drop a node group only when no data nodes in the node group contain any data. Since there is currently no way to empty a specific data node or node group, this command works in only the following two cases:

  1. After issuing CREATE NODEGROUP in the ndb_mgm client, but before issuing any ALTER TABLE ... REORGANIZE PARTITION statements in the mysql client.

  2. After dropping all NDBCLUSTER tables using DROP TABLE.

    TRUNCATE TABLE does not work for this purpose because the data nodes continue to store the table definitions.
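
When either of these conditions is satisfied, the node group can be dropped in the management client as shown here, where 1 is the ID of an (illustrative) empty node group:

ndb_mgm> DROP NODEGROUP 1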

21.5.15.2 Adding NDB Cluster Data Nodes Online: Basic procedure

In this section, we list the basic steps required to add new data nodes to an NDB Cluster. This procedure applies whether you are using ndbd or ndbmtd binaries for the data node processes. For a more detailed example, see Section 21.5.15.3, “Adding NDB Cluster Data Nodes Online: Detailed Example”.

Assuming that you already have a running NDB Cluster, adding data nodes online requires the following steps:

  1. Edit the cluster configuration config.ini file, adding new [ndbd] sections corresponding to the nodes to be added. In the case where the cluster uses multiple management servers, these changes need to be made to all config.ini files used by the management servers.

    You must be careful that node IDs for any new data nodes added in the config.ini file do not overlap node IDs used by existing nodes. In the event that you have API nodes using dynamically allocated node IDs and these IDs match node IDs that you want to use for new data nodes, it is possible to force any such API nodes to migrate, as described later in this procedure.

  2. Perform a rolling restart of all NDB Cluster management servers.

    Important

    All management servers must be restarted with the --reload or --initial option to force the reading of the new configuration.

  3. Perform a rolling restart of all existing NDB Cluster data nodes. It is not necessary (or usually even desirable) to use --initial when restarting the existing data nodes.

    If you are using API nodes with dynamically allocated IDs matching any node IDs that you wish to assign to new data nodes, you must restart all API nodes (including SQL nodes) before restarting any of the data nodes processes in this step. This causes any API nodes with node IDs that were previously not explicitly assigned to relinquish those node IDs and acquire new ones.

  4. Perform a rolling restart of any SQL or API nodes connected to the NDB Cluster.

  5. Start the new data nodes.

    The new data nodes may be started in any order. They can also be started concurrently, as long as they are started after the rolling restarts of all existing data nodes have been completed, and before proceeding to the next step.

  6. Execute one or more CREATE NODEGROUP commands in the NDB Cluster management client to create the new node group or node groups to which the new data nodes will belong.

  7. Redistribute the cluster's data among all data nodes, including the new ones. Normally this is done by issuing an ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION statement in the mysql client for each NDBCLUSTER table.

    Exception: For tables created using the MAX_ROWS option, this statement does not work; instead, use ALTER TABLE ... ALGORITHM=INPLACE, MAX_ROWS=... to reorganize such tables. You should also bear in mind that using MAX_ROWS to set the number of partitions in this fashion is deprecated in NDB 7.5.4 and later, where you should use PARTITION_BALANCE instead; see Section 13.1.18.10, “Setting NDB_TABLE Options”, for more information.

    Note

    This needs to be done only for tables already existing at the time the new node group is added. Data in tables created after the new node group is added is distributed automatically; however, data added to any given table tbl that existed before the new nodes were added is not distributed using the new nodes until that table has been reorganized.

  8. ALTER TABLE ... REORGANIZE PARTITION ALGORITHM=INPLACE reorganizes partitions but does not reclaim the space freed on the old nodes. You can do this by issuing, for each NDBCLUSTER table, an OPTIMIZE TABLE statement in the mysql client.

    This works for space used by variable-width columns of in-memory NDB tables. OPTIMIZE TABLE is not supported for fixed-width columns of in-memory tables; it is also not supported for Disk Data tables.
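
    The redistribution and reclamation statements in steps 7 and 8 can be sketched as follows; db1.t1 and db1.t2 are hypothetical NDBCLUSTER tables, with t2 assumed to have been created using MAX_ROWS:

    ```sql
    -- Step 7: redistribute existing data across all node groups
    ALTER TABLE db1.t1 ALGORITHM=INPLACE, REORGANIZE PARTITION;

    -- For a table created with MAX_ROWS (deprecated in NDB 7.5.4 and later;
    -- prefer PARTITION_BALANCE):
    ALTER TABLE db1.t2 ALGORITHM=INPLACE, MAX_ROWS=1000000000;

    -- Step 8: reclaim space freed on the old nodes
    OPTIMIZE TABLE db1.t1;
    ```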

You can add all the nodes desired, then issue several CREATE NODEGROUP commands in succession to add the new node groups to the cluster.
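
For example, if you add four new data nodes (hypothetical node IDs 3 through 6) to a cluster with NoOfReplicas = 2, two successive commands in the management client create two new node groups:

```
ndb_mgm> CREATE NODEGROUP 3,4
Nodegroup 1 created

ndb_mgm> CREATE NODEGROUP 5,6
Nodegroup 2 created
```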

21.5.15.3 Adding NDB Cluster Data Nodes Online: Detailed Example

In this section we provide a detailed example illustrating how to add new NDB Cluster data nodes online, starting with an NDB Cluster having 2 data nodes in a single node group and concluding with a cluster having 4 data nodes in 2 node groups.

Starting configuration.  For purposes of illustration, we assume a minimal configuration, and that the cluster uses a config.ini file containing only the following information:

[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster

[ndbd]
Id = 1
HostName = 198.51.100.1

[ndbd]
Id = 2
HostName = 198.51.100.2

[mgm]
HostName = 198.51.100.10
Id = 10

[api]
Id=20
HostName = 198.51.100.20

[api]
Id=21
HostName = 198.51.100.21

Note

We have left a gap in the sequence between data node IDs and other nodes. This makes it easier later to assign node IDs that are not already in use to newly added data nodes.

We also assume that you have already started the cluster using the appropriate command line or my.cnf options, and that running SHOW in the management client produces output similar to what is shown here:

-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: 198.51.100.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @198.51.100.1  (5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=2    @198.51.100.2  (5.7.28-ndb-7.5.16, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @198.51.100.10  (5.7.28-ndb-7.5.16)

[mysqld(API)]   2 node(s)
id=20   @198.51.100.20  (5.7.28-ndb-7.5.16)
id=21   @198.51.100.21  (5.7.28-ndb-7.5.16)

Finally, we assume that the cluster contains a single NDBCLUSTER table created as shown here:

USE n;

CREATE TABLE ips (
    id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    country_code CHAR(2) NOT NULL,
    type CHAR(4) NOT NULL,
    ip_address VARCHAR(15) NOT NULL,
    addresses BIGINT UNSIGNED DEFAULT NULL,
    date BIGINT UNSIGNED DEFAULT NULL
)   ENGINE NDBCLUSTER;

The memory usage and related information shown later in this section was generated after inserting approximately 50000 rows into this table.

Note

In this example, we show the single-threaded ndbd being used for the data node processes. If you are using the multithreaded ndbmtd, you can also apply this example by substituting ndbmtd for ndbd wherever it appears in the steps that follow.

Step 1: Update configuration file.  Open the cluster global configuration file in a text editor and add [ndbd] sections corresponding to the 2 new data nodes. (We give these data nodes IDs 3 and 4, and assume that they are to be run on host machines at addresses 198.51.100.3 and 198.51.100.4, respectively.) After you have added the new sections, the contents of the config.ini file should look like what is shown here, where the additions to the file are shown in bold type:

[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster

[ndbd]
Id = 1
HostName = 198.51.100.1

[ndbd]
Id = 2
HostName = 198.51.100.2

[ndbd]
Id = 3
HostName = 198.51.100.3

[ndbd]
Id = 4
HostName = 198.51.100.4

[mgm]
HostName = 198.51.100.10
Id = 10

[api]
Id=20
HostName = 198.51.100.20

[api]
Id=21
HostName = 198.51.100.21

Once you have made the necessary changes, save the file.

Step 2: Restart the management server.  Restarting the cluster management server requires that you issue separate commands to stop the management server and then to start it again, as follows:

  1. Stop the management server using the management client STOP command, as shown here:

    ndb_mgm> 10 STOP
    Node 10 has shut down.
    Disconnecting to allow Management Server to shutdown
    
    shell>
    
  2. Because shutting down the management server causes the management client to terminate, you must start the management server from the system shell. For simplicity, we assume that config.ini is in the same directory as the management server binary, but in practice, you must supply the correct path to the configuration file. You must also supply the --reload or --initial option so that the management server reads the new configuration from the file rather than its configuration cache. If your shell's current directory is also the same as the directory where the management server binary is located, then you can invoke the management server as shown here:

    shell> ndb_mgmd -f config.ini --reload
    2008-12-08 17:29:23 [MgmSrvr] INFO     -- NDB Cluster Management Server. 5.7.28-ndb-7.5.16
    2008-12-08 17:29:23 [MgmSrvr] INFO     -- Reading cluster configuration from 'config.ini'
    

If you check the output of SHOW in the management client after restarting the ndb_mgm process, you should now see something like this:

-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: 198.51.100.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @198.51.100.1  (5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=2    @198.51.100.2  (5.7.28-ndb-7.5.16, Nodegroup: 0)
id=3 (not connected, accepting connect from 198.51.100.3)
id=4 (not connected, accepting connect from 198.51.100.4)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @198.51.100.10  (5.7.28-ndb-7.5.16)

[mysqld(API)]   2 node(s)
id=20   @198.51.100.20  (5.7.28-ndb-7.5.16)
id=21   @198.51.100.21  (5.7.28-ndb-7.5.16)

Step 3: Perform a rolling restart of the existing data nodes.  This step can be accomplished entirely within the cluster management client using the RESTART command, as shown here:

ndb_mgm> 1 RESTART
Node 1: Node shutdown initiated
Node 1: Node shutdown completed, restarting, no start.
Node 1 is being restarted

ndb_mgm> Node 1: Start initiated (version 7.5.16)
Node 1: Started (version 7.5.16)

ndb_mgm> 2 RESTART
Node 2: Node shutdown initiated
Node 2: Node shutdown completed, restarting, no start.
Node 2 is being restarted

ndb_mgm> Node 2: Start initiated (version 7.5.16)

ndb_mgm> Node 2: Started (version 7.5.16)

Important

After issuing each X RESTART command, wait until the management client reports Node X: Started (version ...) before proceeding any further.

You can verify that all existing data nodes were restarted using the updated configuration by checking the ndbinfo.nodes table in the mysql client.
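
For example, the following query (a sketch; the column set is assumed from the ndbinfo.nodes table) shows one row per data node, including the configuration generation each node is currently running with:

```sql
SELECT node_id, status, start_phase, config_generation
    FROM ndbinfo.nodes;
```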

Step 4: Perform a rolling restart of all cluster API nodes.  Shut down and restart each MySQL server acting as an SQL node in the cluster using mysqladmin shutdown followed by mysqld_safe (or another startup script). This should be similar to what is shown here, where password is the MySQL root password for a given MySQL server instance:

shell> mysqladmin -uroot -ppassword shutdown
081208 20:19:56 mysqld_safe mysqld from pid file
/usr/local/mysql/var/tonfisk.pid ended
shell> mysqld_safe --ndbcluster --ndb-connectstring=198.51.100.10 &
081208 20:20:06 mysqld_safe Logging to '/usr/local/mysql/var/tonfisk.err'.
081208 20:20:06 mysqld_safe Starting mysqld daemon with databases
from /usr/local/mysql/var

Of course, the exact input and output depend on how and where MySQL is installed on the system, as well as which options you choose to start it (and whether or not some or all of these options are specified in a my.cnf file).

Step 5: Perform an initial start of the new data nodes.  From a system shell on each of the hosts for the new data nodes, start the data nodes as shown here, using the --initial option:

shell> ndbd -c 198.51.100.10 --initial

Note

Unlike the case with restarting the existing data nodes, you can start the new data nodes concurrently; you do not need to wait for one to finish starting before starting the other.

Wait until both of the new data nodes have started before proceeding with the next step. Once the new data nodes have started, you can see in the output of the management client SHOW command that they do not yet belong to any node group (as indicated with bold type here):

ndb_mgm> SHOW
Connected to Management Server at: 198.51.100.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @198.51.100.1  (5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=2    @198.51.100.2  (5.7.28-ndb-7.5.16, Nodegroup: 0)
id=3    @198.51.100.3  (5.7.28-ndb-7.5.16, no nodegroup)
id=4    @198.51.100.4  (5.7.28-ndb-7.5.16, no nodegroup)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @198.51.100.10  (5.7.28-ndb-7.5.16)

[mysqld(API)]   2 node(s)
id=20   @198.51.100.20  (5.7.28-ndb-7.5.16)
id=21   @198.51.100.21  (5.7.28-ndb-7.5.16)

Step 6: Create a new node group.  You can do this by issuing a CREATE NODEGROUP command in the cluster management client. This command takes as its argument a comma-separated list of the node IDs of the data nodes to be included in the new node group, as shown here:

ndb_mgm> CREATE NODEGROUP 3,4
Nodegroup 1 created

By issuing SHOW again, you can verify that data nodes 3 and 4 have joined the new node group (again indicated in bold type):

ndb_mgm> SHOW
Connected to Management Server at: 198.51.100.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @198.51.100.1  (5.7.28-ndb-7.5.16, Nodegroup: 0, *)
id=2    @198.51.100.2  (5.7.28-ndb-7.5.16, Nodegroup: 0)
id=3    @198.51.100.3  (5.7.28-ndb-7.5.16, Nodegroup: 1)
id=4    @198.51.100.4  (5.7.28-ndb-7.5.16, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @198.51.100.10  (5.7.28-ndb-7.5.16)

[mysqld(API)]   2 node(s)
id=20   @198.51.100.20  (5.7.28-ndb-7.5.16)
id=21   @198.51.100.21  (5.7.28-ndb-7.5.16)

Step 7: Redistribute cluster data.  When a node group is created, existing data and indexes are not automatically distributed to the new node group's data nodes, as you can see by issuing the appropriate REPORT command in the management client:

ndb_mgm> ALL REPORT MEMORY

Node 1: Data usage is 5%(177 32K pages of total 3200)
Node 1: Index usage is 0%(108 8K pages of total 12832)
Node 2: Data usage is 5%(177 32K pages of total 3200)
Node 2: Index usage is 0%(108 8K pages of total 12832)
Node 3: Data usage is 0%(0 32K pages of total 3200)
Node 3: Index usage is 0%(0 8K pages of total 12832)
Node 4: Data usage is 0%(0 32K pages of total 3200)
Node 4: Index usage is 0%(0 8K pages of total 12832)

By using ndb_desc with the -p option, which causes the output to include partitioning information, you can see that the table still uses only 2 partitions (in the Per partition info section of the output, shown here in bold text):

shell> ndb_desc -c 198.51.100.10 -d n ips -p
-- ips --
Version: 1
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 340
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 2
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY

-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex

-- Per partition info --
Partition   Row count   Commit count  Frag fixed memory   Frag varsized memory
0           26086       26086         1572864             557056
1           26329       26329         1605632             557056

NDBT_ProgramExit: 0 - OK

You can cause the data to be redistributed among all of the data nodes by issuing, for each NDB table, an ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION statement in the mysql client.

Important

ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION does not work on tables that were created with the MAX_ROWS option. Instead, use ALTER TABLE ... ALGORITHM=INPLACE, MAX_ROWS=... to reorganize such tables.

Keep in mind that using MAX_ROWS to set the number of partitions per table is deprecated in NDB 7.5.4 and later, where you should use PARTITION_BALANCE instead; see Section 13.1.18.10, “Setting NDB_TABLE Options”, for more information.

After issuing the statement ALTER TABLE ips ALGORITHM=INPLACE, REORGANIZE PARTITION, you can see using ndb_desc that the data for this table is now stored using 4 partitions, as shown here (with the relevant portions of the output in bold type):

shell> ndb_desc -c 198.51.100.10 -d n ips -p
-- ips --
Version: 16777217
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 341
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 4
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY

-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex

-- Per partition info --
Partition   Row count   Commit count  Frag fixed memory   Frag varsized memory
0           12981       52296         1572864             557056
1           13236       52515         1605632             557056
2           13105       13105         819200              294912
3           13093       13093         819200              294912

NDBT_ProgramExit: 0 - OK

Note

Normally, ALTER TABLE table_name [ALGORITHM=INPLACE,] REORGANIZE PARTITION is used with a list of partition identifiers and a set of partition definitions to create a new partitioning scheme for a table that has already been explicitly partitioned. Its use here to redistribute data onto a new NDB Cluster node group is an exception in this regard; when used in this way, no other keywords or identifiers follow REORGANIZE PARTITION.

For more information, see Section 13.1.8, “ALTER TABLE Syntax”.

In addition, for each table, the ALTER TABLE statement should be followed by an OPTIMIZE TABLE to reclaim wasted space. You can obtain a list of all NDBCLUSTER tables using the following query against the INFORMATION_SCHEMA.TABLES table:

SELECT TABLE_SCHEMA, TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE ENGINE = 'NDBCLUSTER';
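
Extending that query, you can also generate the required ALTER TABLE statements themselves; this is a sketch, and you must still execute each statement that it produces:

```sql
SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` ALGORITHM=INPLACE, REORGANIZE PARTITION;') AS stmt
    FROM INFORMATION_SCHEMA.TABLES
    WHERE ENGINE = 'NDBCLUSTER';
```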

Note

The INFORMATION_SCHEMA.TABLES.ENGINE value for an NDB Cluster table is always NDBCLUSTER, regardless of whether the CREATE TABLE statement used to create the table (or ALTER TABLE statement used to convert an existing table from a different storage engine) used NDB or NDBCLUSTER in its ENGINE option.

You can see after performing these statements in the output of ALL REPORT MEMORY that the data and indexes are now redistributed between all cluster data nodes, as shown here:

ndb_mgm> ALL REPORT MEMORY

Node 1: Data usage is 5%(176 32K pages of total 3200)
Node 1: Index usage is 0%(76 8K pages of total 12832)
Node 2: Data usage is 5%(176 32K pages of total 3200)
Node 2: Index usage is 0%(76 8K pages of total 12832)
Node 3: Data usage is 2%(80 32K pages of total 3200)
Node 3: Index usage is 0%(51 8K pages of total 12832)
Node 4: Data usage is 2%(80 32K pages of total 3200)
Node 4: Index usage is 0%(50 8K pages of total 12832)

Note

Since only one DDL operation on NDBCLUSTER tables can be executed at a time, you must wait for each ALTER TABLE ... REORGANIZE PARTITION statement to finish before issuing the next one.

It is not necessary to issue ALTER TABLE ... REORGANIZE PARTITION statements for NDBCLUSTER tables created after the new data nodes have been added; data added to such tables is distributed among all data nodes automatically. However, in NDBCLUSTER tables that existed prior to the addition of the new nodes, neither existing nor new data is distributed using the new nodes until these tables have been reorganized using ALTER TABLE ... REORGANIZE PARTITION.

Alternative procedure, without rolling restart.  It is possible to avoid the need for a rolling restart by configuring the extra data nodes, but not starting them, when first starting the cluster. We assume, as before, that you wish to start with two data nodes—nodes 1 and 2—in one node group and later to expand the cluster to four data nodes, by adding a second node group consisting of nodes 3 and 4:

[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster

[ndbd]
Id = 1
HostName = 198.51.100.1

[ndbd]
Id = 2
HostName = 198.51.100.2

[ndbd]
Id = 3
HostName = 198.51.100.3
Nodegroup = 65536

[ndbd]
Id = 4
HostName = 198.51.100.4
Nodegroup = 65536

[mgm]
HostName = 198.51.100.10
Id = 10

[api]
Id=20
HostName = 198.51.100.20

[api]
Id=21
HostName = 198.51.100.21

The data nodes to be brought online at a later time (nodes 3 and 4) can be configured with NodeGroup = 65536, in which case nodes 1 and 2 can each be started as shown here:

shell> ndbd -c 198.51.100.10 --initial

The data nodes configured with NodeGroup = 65536 are treated by the management server as though you had started nodes 1 and 2 using --nowait-nodes=3,4 after waiting for a period of time determined by the setting for the StartNoNodeGroupTimeout data node configuration parameter. By default, this is 15 seconds (15000 milliseconds).
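
In other words, the effect is the same as if nodes 1 and 2 had each been started as shown here:

```
shell> ndbd -c 198.51.100.10 --initial --nowait-nodes=3,4
```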

Note

StartNoNodeGroupTimeout must be the same for all data nodes in the cluster; for this reason, you should always set it in the [ndbd default] section of the config.ini file, rather than for individual data nodes.
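
For example, to raise the wait from the default 15 seconds to 30 seconds for all data nodes, you could use a config.ini fragment such as this one:

```
[ndbd default]
StartNoNodeGroupTimeout = 30000
```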

When you are ready to add the second node group, you need only perform the following additional steps:

  1. Start data nodes 3 and 4, invoking the data node process once for each new node:

    shell> ndbd -c 198.51.100.10 --initial
    
  2. Issue the appropriate CREATE NODEGROUP command in the management client:

    ndb_mgm> CREATE NODEGROUP 3,4
    
  3. In the mysql client, issue ALTER TABLE ... REORGANIZE PARTITION and OPTIMIZE TABLE statements for each existing NDBCLUSTER table. (As noted elsewhere in this section, existing NDB Cluster tables cannot use the new nodes for data distribution until this has been done.)

21.5.16 Distributed Privileges Using Shared Grant Tables

NDB Cluster supports distribution of MySQL users and privileges across all SQL nodes in an NDB Cluster. This support is not enabled by default; you should follow the procedure outlined in this section in order to do so.

Normally, each MySQL server's user privilege tables in the mysql database must use the MyISAM storage engine, which means that a user account and its associated privileges created on one SQL node are not available on the cluster's other SQL nodes. An SQL file ndb_dist_priv.sql provided with the NDB Cluster distribution can be found in the share directory in the MySQL installation directory.

The first step in enabling distributed privileges is to load this script into a MySQL Server that functions as an SQL node (which we refer to after this as the target SQL node or MySQL Server). You can do this by executing the following command from the system shell on the target SQL node after changing to its MySQL installation directory (where options stands for any additional options needed to connect to this SQL node):

shell> mysql options -uroot < share/ndb_dist_priv.sql

Importing ndb_dist_priv.sql creates a number of stored routines (six stored procedures and one stored function) in the mysql database on the target SQL node. After connecting to the SQL node in the mysql client (as the MySQL root user), you can verify that these were created as shown here:

mysql> SELECT ROUTINE_NAME, ROUTINE_SCHEMA, ROUTINE_TYPE
    ->     FROM INFORMATION_SCHEMA.ROUTINES
    ->     WHERE ROUTINE_NAME LIKE 'mysql_cluster%'
    ->     ORDER BY ROUTINE_TYPE;
+---------------------------------------------+----------------+--------------+
| ROUTINE_NAME                                | ROUTINE_SCHEMA | ROUTINE_TYPE |
+---------------------------------------------+----------------+--------------+
| mysql_cluster_privileges_are_distributed    | mysql          | FUNCTION     |
| mysql_cluster_backup_privileges             | mysql          | PROCEDURE    |
| mysql_cluster_move_grant_tables             | mysql          | PROCEDURE    |
| mysql_cluster_move_privileges               | mysql          | PROCEDURE    |
| mysql_cluster_restore_local_privileges      | mysql          | PROCEDURE    |
| mysql_cluster_restore_privileges            | mysql          | PROCEDURE    |
| mysql_cluster_restore_privileges_from_local | mysql          | PROCEDURE    |
+---------------------------------------------+----------------+--------------+
7 rows in set (0.01 sec)

The stored procedure named mysql_cluster_move_privileges creates backup copies of the existing privilege tables, then converts them to NDB.

mysql_cluster_move_privileges performs the backup and conversion in two steps. The first step is to call mysql_cluster_backup_privileges, which creates two sets of copies in the mysql database:

  • A set of local copies that use the MyISAM storage engine. Their names are generated by adding the suffix _backup to the original privilege table names.

  • A set of distributed copies that use the NDBCLUSTER storage engine. These tables are named by prefixing ndb_ and appending _backup to the names of the original tables.

After the copies are created, mysql_cluster_move_privileges invokes mysql_cluster_move_grant_tables, which contains the ALTER TABLE ... ENGINE = NDB statements that convert the mysql system tables to NDB.
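
The statements contained in mysql_cluster_move_grant_tables take this general form, with mysql.user shown here as a representative example (the procedure itself covers each of the grant tables):

```sql
ALTER TABLE mysql.user ENGINE = NDB;
```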

Normally, you should not invoke either mysql_cluster_backup_privileges or mysql_cluster_move_grant_tables manually; these stored procedures are intended only for use by mysql_cluster_move_privileges.

Although the original privilege tables are backed up automatically, it is always a good idea to create backups manually of the existing privilege tables on all affected SQL nodes before proceeding. You can do this using mysqldump in a manner similar to what is shown here:

shell> mysqldump options -uroot \
    mysql user db tables_priv columns_priv procs_priv proxies_priv > backup_file

To perform the conversion, you must be connected to the target SQL node using the mysql client (again, as the MySQL root user). Invoke the stored procedure like this:

mysql> CALL mysql.mysql_cluster_move_privileges();
Query OK, 0 rows affected (22.32 sec)

Depending on the number of rows in the privilege tables, this procedure may take some time to execute. If some of the privilege tables are empty, you may see one or more No data - zero rows fetched, selected, or processed warnings when mysql_cluster_move_privileges returns. In such cases, the warnings may be safely ignored. To verify that the conversion was successful, you can use the stored function mysql_cluster_privileges_are_distributed as shown here:

mysql> SELECT CONCAT(
    ->    'Conversion ',
    ->    IF(mysql.mysql_cluster_privileges_are_distributed(), 'succeeded', 'failed'),
    ->    '.')
    ->    AS Result;
+-----------------------+
| Result                |
+-----------------------+
| Conversion succeeded. |
+-----------------------+
1 row in set (0.00 sec)

mysql_cluster_privileges_are_distributed checks for the existence of the distributed privilege tables and returns 1 if all of the privilege tables are distributed; otherwise, it returns 0.

You can verify that the backups have been created using a query such as this one:

mysql> SELECT TABLE_NAME, ENGINE FROM INFORMATION_SCHEMA.TABLES
    ->     WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME LIKE '%backup'
    ->     ORDER BY ENGINE;
+-------------------------+------------+
| TABLE_NAME              | ENGINE     |
+-------------------------+------------+
| db_backup               | MyISAM     |
| user_backup             | MyISAM     |
| columns_priv_backup     | MyISAM     |
| tables_priv_backup      | MyISAM     |
| proxies_priv_backup     | MyISAM     |
| procs_priv_backup       | MyISAM     |
| ndb_columns_priv_backup | ndbcluster |
| ndb_user_backup         | ndbcluster |
| ndb_tables_priv_backup  | ndbcluster |
| ndb_proxies_priv_backup | ndbcluster |
| ndb_procs_priv_backup   | ndbcluster |
| ndb_db_backup           | ndbcluster |
+-------------------------+------------+
12 rows in set (0.00 sec)

Once the conversion to distributed privileges has been made, any time a MySQL user account is created, dropped, or has its privileges updated on any SQL node, the changes take effect immediately on all other MySQL servers attached to the cluster. Once privileges are distributed, any new MySQL Servers that connect to the cluster automatically participate in the distribution.

Note

For clients connected to SQL nodes at the time that mysql_cluster_move_privileges is executed, you may need to execute FLUSH PRIVILEGES on those SQL nodes, or to disconnect and then reconnect the clients, in order for those clients to be able to see the changes in privileges.

All MySQL user privileges are distributed across all connected MySQL Servers. This includes any privileges associated with views and stored routines, even though distribution of views and stored routines themselves is not currently supported.

In the event that an SQL node becomes disconnected from the cluster while mysql_cluster_move_privileges is running, you must drop its privilege tables after reconnecting to the cluster, using a statement such as DROP TABLE IF EXISTS mysql.user, mysql.db, mysql.tables_priv, mysql.columns_priv, mysql.procs_priv. This causes the SQL node to use the shared privilege tables rather than its own local versions of them. This is not needed when connecting a new SQL node to the cluster for the first time.

In the event of an initial restart of the entire cluster (all data nodes shut down, then started again with --initial), the shared privilege tables are lost. If this happens, you can restore them using the original target SQL node either from the backups made by mysql_cluster_move_privileges or from a dump file created with mysqldump. If you need to use a new MySQL Server to perform the restoration, you should start it with --skip-grant-tables when connecting to the cluster for the first time; after this, you can restore the privilege tables locally, then distribute them again using mysql_cluster_move_privileges. After restoring and distributing the tables, you should restart this MySQL Server without the --skip-grant-tables option.

You can also restore the distributed tables using ndb_restore --restore-privilege-tables from a backup made using START BACKUP in the ndb_mgm client. (The MyISAM tables created by mysql_cluster_move_privileges are not backed up by the START BACKUP command.) ndb_restore does not restore the privilege tables by default; the --restore-privilege-tables option causes it to do so.

You can restore the SQL node's local privileges using either of two procedures. mysql_cluster_restore_privileges works as follows:

  1. If copies of the mysql.ndb_*_backup tables are available, attempt to restore the system tables from these.

  2. Otherwise, attempt to restore the system tables from the local backups named *_backup (without the ndb_ prefix).

The other procedure, named mysql_cluster_restore_local_privileges, restores the system tables from the local backups only, without checking the ndb_* backups.

The system tables re-created by mysql_cluster_restore_privileges or mysql_cluster_restore_local_privileges use the MySQL server default storage engine; they are not shared or distributed in any way, and do not use NDB Cluster's NDB storage engine.

The additional stored procedure mysql_cluster_restore_privileges_from_local is intended for the use of mysql_cluster_restore_privileges and mysql_cluster_restore_local_privileges. It should not be invoked directly.

Important

Applications that access NDB Cluster data directly, including NDB API and ClusterJ applications, are not subject to the MySQL privilege system. This means that, once you have distributed the grant tables, they can be freely accessed by such applications, just as they can any other NDB tables. In particular, you should keep in mind that NDB API and ClusterJ applications can read and write user names, host names, password hashes, and any other contents of the distributed grant tables without any restrictions.

21.5.17 NDB API Statistics Counters and Variables

A number of types of statistical counters relating to actions performed by or affecting Ndb objects are available. Such actions include starting and closing (or aborting) transactions; primary key and unique key operations; table, range, and pruned scans; threads blocked while waiting for the completion of various operations; and data and events sent and received by NDBCLUSTER. The counters are incremented inside the NDB kernel whenever NDB API calls are made or data is sent to or received by the data nodes. mysqld exposes these counters as system status variables; their values can be read in the output of SHOW STATUS, or by querying the INFORMATION_SCHEMA.SESSION_STATUS or INFORMATION_SCHEMA.GLOBAL_STATUS table. By comparing the values before and after statements operating on NDB tables, you can observe the corresponding actions taken on the API level, and thus the cost of performing the statement.

You can list all of these status variables using the following SHOW STATUS statement:

mysql> SHOW STATUS LIKE 'ndb_api%';
+--------------------------------------------+----------+
| Variable_name                              | Value    |
+--------------------------------------------+----------+
| Ndb_api_wait_exec_complete_count_session   | 0        |
| Ndb_api_wait_scan_result_count_session     | 0        |
| Ndb_api_wait_meta_request_count_session    | 0        |
| Ndb_api_wait_nanos_count_session           | 0        |
| Ndb_api_bytes_sent_count_session           | 0        |
| Ndb_api_bytes_received_count_session       | 0        |
| Ndb_api_trans_start_count_session          | 0        |
| Ndb_api_trans_commit_count_session         | 0        |
| Ndb_api_trans_abort_count_session          | 0        |
| Ndb_api_trans_close_count_session          | 0        |
| Ndb_api_pk_op_count_session                | 0        |
| Ndb_api_uk_op_count_session                | 0        |
| Ndb_api_table_scan_count_session           | 0        |
| Ndb_api_range_scan_count_session           | 0        |
| Ndb_api_pruned_scan_count_session          | 0        |
| Ndb_api_scan_batch_count_session           | 0        |
| Ndb_api_read_row_count_session             | 0        |
| Ndb_api_trans_local_read_row_count_session | 0        |
| Ndb_api_event_data_count_injector          | 0        |
| Ndb_api_event_nondata_count_injector       | 0        |
| Ndb_api_event_bytes_count_injector         | 0        |
| Ndb_api_wait_exec_complete_count_slave     | 0        |
| Ndb_api_wait_scan_result_count_slave       | 0        |
| Ndb_api_wait_meta_request_count_slave      | 0        |
| Ndb_api_wait_nanos_count_slave             | 0        |
| Ndb_api_bytes_sent_count_slave             | 0        |
| Ndb_api_bytes_received_count_slave         | 0        |
| Ndb_api_trans_start_count_slave            | 0        |
| Ndb_api_trans_commit_count_slave           | 0        |
| Ndb_api_trans_abort_count_slave            | 0        |
| Ndb_api_trans_close_count_slave            | 0        |
| Ndb_api_pk_op_count_slave                  | 0        |
| Ndb_api_uk_op_count_slave                  | 0        |
| Ndb_api_table_scan_count_slave             | 0        |
| Ndb_api_range_scan_count_slave             | 0        |
| Ndb_api_pruned_scan_count_slave            | 0        |
| Ndb_api_scan_batch_count_slave             | 0        |
| Ndb_api_read_row_count_slave               | 0        |
| Ndb_api_trans_local_read_row_count_slave   | 0        |
| Ndb_api_wait_exec_complete_count           | 2        |
| Ndb_api_wait_scan_result_count             | 3        |
| Ndb_api_wait_meta_request_count            | 27       |
| Ndb_api_wait_nanos_count                   | 45612023 |
| Ndb_api_bytes_sent_count                   | 992      |
| Ndb_api_bytes_received_count               | 9640     |
| Ndb_api_trans_start_count                  | 2        |
| Ndb_api_trans_commit_count                 | 1        |
| Ndb_api_trans_abort_count                  | 0        |
| Ndb_api_trans_close_count                  | 2        |
| Ndb_api_pk_op_count                        | 1        |
| Ndb_api_uk_op_count                        | 0        |
| Ndb_api_table_scan_count                   | 1        |
| Ndb_api_range_scan_count                   | 0        |
| Ndb_api_pruned_scan_count                  | 0        |
| Ndb_api_scan_batch_count                   | 0        |
| Ndb_api_read_row_count                     | 1        |
| Ndb_api_trans_local_read_row_count         | 1        |
| Ndb_api_event_data_count                   | 0        |
| Ndb_api_event_nondata_count                | 0        |
| Ndb_api_event_bytes_count                  | 0        |
+--------------------------------------------+----------+
60 rows in set (0.02 sec)

These status variables are also available from the SESSION_STATUS and GLOBAL_STATUS tables of the INFORMATION_SCHEMA database, as shown here:

mysql> SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS 
    ->   WHERE VARIABLE_NAME LIKE 'ndb_api%';
+--------------------------------------------+----------------+
| VARIABLE_NAME                              | VARIABLE_VALUE |
+--------------------------------------------+----------------+
| NDB_API_WAIT_EXEC_COMPLETE_COUNT_SESSION   | 2              |
| NDB_API_WAIT_SCAN_RESULT_COUNT_SESSION     | 0              |
| NDB_API_WAIT_META_REQUEST_COUNT_SESSION    | 1              |
| NDB_API_WAIT_NANOS_COUNT_SESSION           | 8144375        |
| NDB_API_BYTES_SENT_COUNT_SESSION           | 68             |
| NDB_API_BYTES_RECEIVED_COUNT_SESSION       | 84             |
| NDB_API_TRANS_START_COUNT_SESSION          | 1              |
| NDB_API_TRANS_COMMIT_COUNT_SESSION         | 1              |
| NDB_API_TRANS_ABORT_COUNT_SESSION          | 0              |
| NDB_API_TRANS_CLOSE_COUNT_SESSION          | 1              |
| NDB_API_PK_OP_COUNT_SESSION                | 1              |
| NDB_API_UK_OP_COUNT_SESSION                | 0              |
| NDB_API_TABLE_SCAN_COUNT_SESSION           | 0              |
| NDB_API_RANGE_SCAN_COUNT_SESSION           | 0              |
| NDB_API_PRUNED_SCAN_COUNT_SESSION          | 0              |
| NDB_API_SCAN_BATCH_COUNT_SESSION           | 0              |
| NDB_API_READ_ROW_COUNT_SESSION             | 1              |
| NDB_API_TRANS_LOCAL_READ_ROW_COUNT_SESSION | 1              |
| NDB_API_EVENT_DATA_COUNT_INJECTOR          | 0              |
| NDB_API_EVENT_NONDATA_COUNT_INJECTOR       | 0              |
| NDB_API_EVENT_BYTES_COUNT_INJECTOR         | 0              |
| NDB_API_WAIT_EXEC_COMPLETE_COUNT_SLAVE     | 0              |
| NDB_API_WAIT_SCAN_RESULT_COUNT_SLAVE       | 0              |
| NDB_API_WAIT_META_REQUEST_COUNT_SLAVE      | 0              |
| NDB_API_WAIT_NANOS_COUNT_SLAVE             | 0              |
| NDB_API_BYTES_SENT_COUNT_SLAVE             | 0              |
| NDB_API_BYTES_RECEIVED_COUNT_SLAVE         | 0              |
| NDB_API_TRANS_START_COUNT_SLAVE            | 0              |
| NDB_API_TRANS_COMMIT_COUNT_SLAVE           | 0              |
| NDB_API_TRANS_ABORT_COUNT_SLAVE            | 0              |
| NDB_API_TRANS_CLOSE_COUNT_SLAVE            | 0              |
| NDB_API_PK_OP_COUNT_SLAVE                  | 0              |
| NDB_API_UK_OP_COUNT_SLAVE                  | 0              |
| NDB_API_TABLE_SCAN_COUNT_SLAVE             | 0              |
| NDB_API_RANGE_SCAN_COUNT_SLAVE             | 0              |
| NDB_API_PRUNED_SCAN_COUNT_SLAVE            | 0              |
| NDB_API_SCAN_BATCH_COUNT_SLAVE             | 0              |
| NDB_API_READ_ROW_COUNT_SLAVE               | 0              |
| NDB_API_TRANS_LOCAL_READ_ROW_COUNT_SLAVE   | 0              |
| NDB_API_WAIT_EXEC_COMPLETE_COUNT           | 4              |
| NDB_API_WAIT_SCAN_RESULT_COUNT             | 3              |
| NDB_API_WAIT_META_REQUEST_COUNT            | 28             |
| NDB_API_WAIT_NANOS_COUNT                   | 53756398       |
| NDB_API_BYTES_SENT_COUNT                   | 1060           |
| NDB_API_BYTES_RECEIVED_COUNT               | 9724           |
| NDB_API_TRANS_START_COUNT                  | 3              |
| NDB_API_TRANS_COMMIT_COUNT                 | 2              |
| NDB_API_TRANS_ABORT_COUNT                  | 0              |
| NDB_API_TRANS_CLOSE_COUNT                  | 3              |
| NDB_API_PK_OP_COUNT                        | 2              |
| NDB_API_UK_OP_COUNT                        | 0              |
| NDB_API_TABLE_SCAN_COUNT                   | 1              |
| NDB_API_RANGE_SCAN_COUNT                   | 0              |
| NDB_API_PRUNED_SCAN_COUNT                  | 0              |
| NDB_API_SCAN_BATCH_COUNT                   | 0              |
| NDB_API_READ_ROW_COUNT                     | 2              |
| NDB_API_TRANS_LOCAL_READ_ROW_COUNT         | 2              |
| NDB_API_EVENT_DATA_COUNT                   | 0              |
| NDB_API_EVENT_NONDATA_COUNT                | 0              |
| NDB_API_EVENT_BYTES_COUNT                  | 0              |
+--------------------------------------------+----------------+
60 rows in set (0.00 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    ->     WHERE VARIABLE_NAME LIKE 'ndb_api%';
+--------------------------------------------+----------------+
| VARIABLE_NAME                              | VARIABLE_VALUE |
+--------------------------------------------+----------------+
| NDB_API_WAIT_EXEC_COMPLETE_COUNT_SESSION   | 2              |
| NDB_API_WAIT_SCAN_RESULT_COUNT_SESSION     | 0              |
| NDB_API_WAIT_META_REQUEST_COUNT_SESSION    | 1              |
| NDB_API_WAIT_NANOS_COUNT_SESSION           | 8144375        |
| NDB_API_BYTES_SENT_COUNT_SESSION           | 68             |
| NDB_API_BYTES_RECEIVED_COUNT_SESSION       | 84             |
| NDB_API_TRANS_START_COUNT_SESSION          | 1              |
| NDB_API_TRANS_COMMIT_COUNT_SESSION         | 1              |
| NDB_API_TRANS_ABORT_COUNT_SESSION          | 0              |
| NDB_API_TRANS_CLOSE_COUNT_SESSION          | 1              |
| NDB_API_PK_OP_COUNT_SESSION                | 1              |
| NDB_API_UK_OP_COUNT_SESSION                | 0              |
| NDB_API_TABLE_SCAN_COUNT_SESSION           | 0              |
| NDB_API_RANGE_SCAN_COUNT_SESSION           | 0              |
| NDB_API_PRUNED_SCAN_COUNT_SESSION          | 0              |
| NDB_API_SCAN_BATCH_COUNT_SESSION           | 0              |
| NDB_API_READ_ROW_COUNT_SESSION             | 1              |
| NDB_API_TRANS_LOCAL_READ_ROW_COUNT_SESSION | 1              |
| NDB_API_EVENT_DATA_COUNT_INJECTOR          | 0              |
| NDB_API_EVENT_NONDATA_COUNT_INJECTOR       | 0              |
| NDB_API_EVENT_BYTES_COUNT_INJECTOR         | 0              |
| NDB_API_WAIT_EXEC_COMPLETE_COUNT_SLAVE     | 0              |
| NDB_API_WAIT_SCAN_RESULT_COUNT_SLAVE       | 0              |
| NDB_API_WAIT_META_REQUEST_COUNT_SLAVE      | 0              |
| NDB_API_WAIT_NANOS_COUNT_SLAVE             | 0              |
| NDB_API_BYTES_SENT_COUNT_SLAVE             | 0              |
| NDB_API_BYTES_RECEIVED_COUNT_SLAVE         | 0              |
| NDB_API_TRANS_START_COUNT_SLAVE            | 0              |
| NDB_API_TRANS_COMMIT_COUNT_SLAVE           | 0              |
| NDB_API_TRANS_ABORT_COUNT_SLAVE            | 0              |
| NDB_API_TRANS_CLOSE_COUNT_SLAVE            | 0              |
| NDB_API_PK_OP_COUNT_SLAVE                  | 0              |
| NDB_API_UK_OP_COUNT_SLAVE                  | 0              |
| NDB_API_TABLE_SCAN_COUNT_SLAVE             | 0              |
| NDB_API_RANGE_SCAN_COUNT_SLAVE             | 0              |
| NDB_API_PRUNED_SCAN_COUNT_SLAVE            | 0              |
| NDB_API_SCAN_BATCH_COUNT_SLAVE             | 0              |
| NDB_API_READ_ROW_COUNT_SLAVE               | 0              |
| NDB_API_TRANS_LOCAL_READ_ROW_COUNT_SLAVE   | 0              |
| NDB_API_WAIT_EXEC_COMPLETE_COUNT           | 4              |
| NDB_API_WAIT_SCAN_RESULT_COUNT             | 3              |
| NDB_API_WAIT_META_REQUEST_COUNT            | 28             |
| NDB_API_WAIT_NANOS_COUNT                   | 53756398       |
| NDB_API_BYTES_SENT_COUNT                   | 1060           |
| NDB_API_BYTES_RECEIVED_COUNT               | 9724           |
| NDB_API_TRANS_START_COUNT                  | 3              |
| NDB_API_TRANS_COMMIT_COUNT                 | 2              |
| NDB_API_TRANS_ABORT_COUNT                  | 0              |
| NDB_API_TRANS_CLOSE_COUNT                  | 3              |
| NDB_API_PK_OP_COUNT                        | 2              |
| NDB_API_UK_OP_COUNT                        | 0              |
| NDB_API_TABLE_SCAN_COUNT                   | 1              |
| NDB_API_RANGE_SCAN_COUNT                   | 0              |
| NDB_API_PRUNED_SCAN_COUNT                  | 0              |
| NDB_API_SCAN_BATCH_COUNT                   | 0              |
| NDB_API_READ_ROW_COUNT                     | 2              |
| NDB_API_TRANS_LOCAL_READ_ROW_COUNT         | 2              |
| NDB_API_EVENT_DATA_COUNT                   | 0              |
| NDB_API_EVENT_NONDATA_COUNT                | 0              |
| NDB_API_EVENT_BYTES_COUNT                  | 0              |
+--------------------------------------------+----------------+
60 rows in set (0.00 sec)

Each Ndb object has its own counters. NDB API applications can read the values of the counters for use in optimization or monitoring. For multithreaded clients which use more than one Ndb object concurrently, it is also possible to obtain a summed view of counters from all Ndb objects belonging to a given Ndb_cluster_connection.

Four sets of these counters are exposed. One set applies to the current session only; the other 3 are global. This is in spite of the fact that their values can be obtained as either session or global status variables in the mysql client. This means that specifying the SESSION or GLOBAL keyword with SHOW STATUS has no effect on the values reported for NDB API statistics status variables, and the value for each of these variables is the same whether the value is obtained from the equivalent column of the SESSION_STATUS or the GLOBAL_STATUS table.

  • Session counters (session specific)

    Session counters relate to the Ndb objects in use by (only) the current session. Use of such objects by other MySQL clients does not influence these counts.

    In order to minimize confusion with standard MySQL session variables, we refer to the variables that correspond to these NDB API session counters as _session variables, with a leading underscore.

  • Slave counters (global)

    This set of counters relates to the Ndb objects used by the replication slave SQL thread, if any. If this mysqld does not act as a replication slave, or does not use NDB tables, then all of these counts are 0.

    We refer to the related status variables as _slave variables (with a leading underscore).

  • Injector counters (global)

    Injector counters relate to the Ndb object used to listen to cluster events by the binary log injector thread. Even when not writing a binary log, mysqld processes attached to an NDB Cluster continue to listen for some events, such as schema changes.

    We refer to the status variables that correspond to NDB API injector counters as _injector variables (with a leading underscore).

  • Server (Global) counters (global)

    This set of counters relates to all Ndb objects currently used by this mysqld. This includes all MySQL client applications, the slave SQL thread (if any), the binlog injector, and the NDB utility thread.

    We refer to the status variables that correspond to these counters as global variables or mysqld-level variables.

You can obtain values for a particular set of variables by additionally filtering for the substring session, slave, or injector in the variable name (along with the common prefix Ndb_api). For _session variables, this can be done as shown here:

mysql> SHOW STATUS LIKE 'ndb_api%session';
+--------------------------------------------+---------+
| Variable_name                              | Value   |
+--------------------------------------------+---------+
| Ndb_api_wait_exec_complete_count_session   | 2       |
| Ndb_api_wait_scan_result_count_session     | 0       |
| Ndb_api_wait_meta_request_count_session    | 1       |
| Ndb_api_wait_nanos_count_session           | 8144375 |
| Ndb_api_bytes_sent_count_session           | 68      |
| Ndb_api_bytes_received_count_session       | 84      |
| Ndb_api_trans_start_count_session          | 1       |
| Ndb_api_trans_commit_count_session         | 1       |
| Ndb_api_trans_abort_count_session          | 0       |
| Ndb_api_trans_close_count_session          | 1       |
| Ndb_api_pk_op_count_session                | 1       |
| Ndb_api_uk_op_count_session                | 0       |
| Ndb_api_table_scan_count_session           | 0       |
| Ndb_api_range_scan_count_session           | 0       |
| Ndb_api_pruned_scan_count_session          | 0       |
| Ndb_api_scan_batch_count_session           | 0       |
| Ndb_api_read_row_count_session             | 1       |
| Ndb_api_trans_local_read_row_count_session | 1       |
+--------------------------------------------+---------+
18 rows in set (0.50 sec)
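
The four statistic types can be told apart purely from the suffix of the variable name. As an informal sketch (plain Python, not part of any MySQL tooling), classifying a name reduces to a suffix check:

```python
# Classify an NDB API status variable into one of the four statistic
# types by its name suffix. Names are taken from the SHOW STATUS
# listings shown in this section; the helper itself is illustrative.
def classify(variable_name: str) -> str:
    name = variable_name.lower()
    if not name.startswith("ndb_api_"):
        raise ValueError("not an NDB API status variable: " + variable_name)
    for suffix in ("session", "slave", "injector"):
        if name.endswith("_" + suffix):
            return suffix
    return "global"  # mysqld-level counters carry no suffix

print(classify("Ndb_api_pk_op_count_session"))     # session
print(classify("Ndb_api_trans_commit_count_slave"))  # slave
print(classify("Ndb_api_event_bytes_count_injector"))  # injector
print(classify("Ndb_api_wait_nanos_count"))        # global
```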

To obtain a listing of the NDB API mysqld-level status variables, filter for variable names beginning with ndb_api and ending in _count, like this:

mysql> SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS
    ->     WHERE VARIABLE_NAME LIKE 'ndb_api%count';
+------------------------------------+----------------+
| VARIABLE_NAME                      | VARIABLE_VALUE |
+------------------------------------+----------------+
| NDB_API_WAIT_EXEC_COMPLETE_COUNT   | 4              |
| NDB_API_WAIT_SCAN_RESULT_COUNT     | 3              |
| NDB_API_WAIT_META_REQUEST_COUNT    | 28             |
| NDB_API_WAIT_NANOS_COUNT           | 53756398       |
| NDB_API_BYTES_SENT_COUNT           | 1060           |
| NDB_API_BYTES_RECEIVED_COUNT       | 9724           |
| NDB_API_TRANS_START_COUNT          | 3              |
| NDB_API_TRANS_COMMIT_COUNT         | 2              |
| NDB_API_TRANS_ABORT_COUNT          | 0              |
| NDB_API_TRANS_CLOSE_COUNT          | 3              |
| NDB_API_PK_OP_COUNT                | 2              |
| NDB_API_UK_OP_COUNT                | 0              |
| NDB_API_TABLE_SCAN_COUNT           | 1              |
| NDB_API_RANGE_SCAN_COUNT           | 0              |
| NDB_API_PRUNED_SCAN_COUNT          | 0              |
| NDB_API_SCAN_BATCH_COUNT           | 0              |
| NDB_API_READ_ROW_COUNT             | 2              |
| NDB_API_TRANS_LOCAL_READ_ROW_COUNT | 2              |
| NDB_API_EVENT_DATA_COUNT           | 0              |
| NDB_API_EVENT_NONDATA_COUNT        | 0              |
| NDB_API_EVENT_BYTES_COUNT          | 0              |
+------------------------------------+----------------+
21 rows in set (0.09 sec)

Not all counters are reflected in all 4 sets of status variables. For the event counters DataEventsRecvdCount, NondataEventsRecvdCount, and EventBytesRecvdCount, only _injector and mysqld-level NDB API status variables are available:

mysql> SHOW STATUS LIKE 'ndb_api%event%';
+--------------------------------------+-------+
| Variable_name                        | Value |
+--------------------------------------+-------+
| Ndb_api_event_data_count_injector    | 0     |
| Ndb_api_event_nondata_count_injector | 0     |
| Ndb_api_event_bytes_count_injector   | 0     |
| Ndb_api_event_data_count             | 0     |
| Ndb_api_event_nondata_count          | 0     |
| Ndb_api_event_bytes_count            | 0     |
+--------------------------------------+-------+
6 rows in set (0.00 sec)

_injector status variables are not implemented for any other NDB API counters, as shown here:

mysql> SHOW STATUS LIKE 'ndb_api%injector%';
+--------------------------------------+-------+
| Variable_name                        | Value |
+--------------------------------------+-------+
| Ndb_api_event_data_count_injector    | 0     |
| Ndb_api_event_nondata_count_injector | 0     |
| Ndb_api_event_bytes_count_injector   | 0     |
+--------------------------------------+-------+
3 rows in set (0.00 sec)

The names of the status variables can easily be associated with the names of the corresponding counters. Each NDB API statistics counter is listed in the following table with a description as well as the names of any MySQL server status variables corresponding to this counter.

Table 21.405 NDB API statistics counters

Counter Name Description Status Variables (by statistic type):
  • Session

  • Slave

  • Injector

  • Server

WaitExecCompleteCount Number of times thread has been blocked while waiting for execution of an operation to complete. Includes all execute() calls as well as implicit executes for blob operations and auto-increment not visible to clients.
WaitScanResultCount Number of times thread has been blocked while waiting for a scan-based signal, such as waiting for additional results, or for a scan to close.
WaitMetaRequestCount Number of times thread has been blocked waiting for a metadata-based signal; this can occur when waiting for a DDL operation or for an epoch to be started (or ended).
WaitNanosCount Total time (in nanoseconds) spent waiting for some type of signal from the data nodes.
BytesSentCount Amount of data (in bytes) sent to the data nodes.
BytesRecvdCount Amount of data (in bytes) received from the data nodes.
TransStartCount Number of transactions started.
TransCommitCount Number of transactions committed.
TransAbortCount Number of transactions aborted.
TransCloseCount Number of transactions closed. (This value may be greater than the sum of TransCommitCount and TransAbortCount.)
PkOpCount Number of operations based on or using primary keys. This count includes blob-part table operations, implicit unlocking operations, and auto-increment operations, as well as primary key operations normally visible to MySQL clients.
UkOpCount Number of operations based on or using unique keys.
TableScanCount Number of table scans that have been started. This includes scans of internal tables.
RangeScanCount Number of range scans that have been started.
PrunedScanCount Number of scans that have been pruned to a single partition.
ScanBatchCount Number of batches of rows received. (A batch in this context is a set of scan results from a single fragment.)
ReadRowCount Total number of rows that have been read. Includes rows read using primary key, unique key, and scan operations.
TransLocalReadRowCount Number of rows read from the same data node on which the transaction was being run.
DataEventsRecvdCount Number of row change events received.
NondataEventsRecvdCount Number of events received, other than row change events.
EventBytesRecvdCount Number of bytes of events received.
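
For most counters, the base (mysqld-level) status variable name is simply Ndb_api_ followed by the counter name converted to lowercase with underscores; the session, slave, and injector variants then append the corresponding suffix. The byte- and event-receipt counters are irregular. The following sketch reconstructs this correspondence from the SHOW STATUS listings above (the helper is illustrative only, not part of MySQL):

```python
import re

# Irregular cases, listed explicitly: "Recvd" is expanded to
# "received" for the byte counter, and the event counters are
# renamed and reordered in the status variable names.
IRREGULAR = {
    "BytesRecvdCount": "Ndb_api_bytes_received_count",
    "DataEventsRecvdCount": "Ndb_api_event_data_count",
    "NondataEventsRecvdCount": "Ndb_api_event_nondata_count",
    "EventBytesRecvdCount": "Ndb_api_event_bytes_count",
}

def status_variable(counter: str) -> str:
    """Return the mysqld-level status variable for an NDB API counter."""
    if counter in IRREGULAR:
        return IRREGULAR[counter]
    # Regular case: CamelCase counter name -> snake_case variable name.
    snake = re.sub(r"(?<!^)(?=[A-Z])", "_", counter).lower()
    return "Ndb_api_" + snake

print(status_variable("WaitExecCompleteCount"))  # Ndb_api_wait_exec_complete_count
print(status_variable("TransCommitCount"))       # Ndb_api_trans_commit_count
print(status_variable("DataEventsRecvdCount"))   # Ndb_api_event_data_count
```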

To see all counts of committed transactions—that is, all TransCommitCount counter status variables—you can filter the results of SHOW STATUS for the substring trans_commit_count, like this:

mysql> SHOW STATUS LIKE '%trans_commit_count%';
+------------------------------------+-------+
| Variable_name                      | Value |
+------------------------------------+-------+
| Ndb_api_trans_commit_count_session | 1     |
| Ndb_api_trans_commit_count_slave   | 0     |
| Ndb_api_trans_commit_count         | 2     |
+------------------------------------+-------+
3 rows in set (0.00 sec)

From this you can determine that 1 transaction has been committed in the current mysql client session, and 2 transactions have been committed on this mysqld since it was last restarted.

You can see how various NDB API counters are incremented by a given SQL statement by comparing the values of the corresponding _session status variables immediately before and after performing the statement. In this example, after getting the initial values from SHOW STATUS, we create in the test database an NDB table, named t, that has a single column:

mysql> SHOW STATUS LIKE 'ndb_api%session%';
+--------------------------------------------+--------+
| Variable_name                              | Value  |
+--------------------------------------------+--------+
| Ndb_api_wait_exec_complete_count_session   | 2      |
| Ndb_api_wait_scan_result_count_session     | 0      |
| Ndb_api_wait_meta_request_count_session    | 3      |
| Ndb_api_wait_nanos_count_session           | 820705 |
| Ndb_api_bytes_sent_count_session           | 132    |
| Ndb_api_bytes_received_count_session       | 372    |
| Ndb_api_trans_start_count_session          | 1      |
| Ndb_api_trans_commit_count_session         | 1      |
| Ndb_api_trans_abort_count_session          | 0      |
| Ndb_api_trans_close_count_session          | 1      |
| Ndb_api_pk_op_count_session                | 1      |
| Ndb_api_uk_op_count_session                | 0      |
| Ndb_api_table_scan_count_session           | 0      |
| Ndb_api_range_scan_count_session           | 0      |
| Ndb_api_pruned_scan_count_session          | 0      |
| Ndb_api_scan_batch_count_session           | 0      |
| Ndb_api_read_row_count_session             | 1      |
| Ndb_api_trans_local_read_row_count_session | 1      |
+--------------------------------------------+--------+
18 rows in set (0.00 sec)

mysql> USE test;
Database changed
mysql> CREATE TABLE t (c INT) ENGINE NDBCLUSTER;
Query OK, 0 rows affected (0.85 sec)

Now you can execute a new SHOW STATUS statement and observe the changes, as shown here (compare the values with those in the previous output):

mysql> SHOW STATUS LIKE 'ndb_api%session%';
+--------------------------------------------+-----------+
| Variable_name                              | Value     |
+--------------------------------------------+-----------+
| Ndb_api_wait_exec_complete_count_session   | 8         |
| Ndb_api_wait_scan_result_count_session     | 0         |
| Ndb_api_wait_meta_request_count_session    | 17        |
| Ndb_api_wait_nanos_count_session           | 706871709 |
| Ndb_api_bytes_sent_count_session           | 2376      |
| Ndb_api_bytes_received_count_session       | 3844      |
| Ndb_api_trans_start_count_session          | 4         |
| Ndb_api_trans_commit_count_session         | 4         |
| Ndb_api_trans_abort_count_session          | 0         |
| Ndb_api_trans_close_count_session          | 4         |
| Ndb_api_pk_op_count_session                | 6         |
| Ndb_api_uk_op_count_session                | 0         |
| Ndb_api_table_scan_count_session           | 0         |
| Ndb_api_range_scan_count_session           | 0         |
| Ndb_api_pruned_scan_count_session          | 0         |
| Ndb_api_scan_batch_count_session           | 0         |
| Ndb_api_read_row_count_session             | 2         |
| Ndb_api_trans_local_read_row_count_session | 1         |
+--------------------------------------------+-----------+
18 rows in set (0.00 sec)

Similarly, you can see the changes in the NDB API statistics counters caused by inserting a row into t: Insert the row, then run the same SHOW STATUS statement used in the previous example, as shown here:

mysql> INSERT INTO t VALUES (100);
Query OK, 1 row affected (0.00 sec)

mysql> SHOW STATUS LIKE 'ndb_api%session%';
+--------------------------------------------+-----------+
| Variable_name                              | Value     |
+--------------------------------------------+-----------+
| Ndb_api_wait_exec_complete_count_session   | 11        |
| Ndb_api_wait_scan_result_count_session     | 6         |
| Ndb_api_wait_meta_request_count_session    | 20        |
| Ndb_api_wait_nanos_count_session           | 707370418 |
| Ndb_api_bytes_sent_count_session           | 2724      |
| Ndb_api_bytes_received_count_session       | 4116      |
| Ndb_api_trans_start_count_session          | 7         |
| Ndb_api_trans_commit_count_session         | 6         |
| Ndb_api_trans_abort_count_session          | 0         |
| Ndb_api_trans_close_count_session          | 7         |
| Ndb_api_pk_op_count_session                | 8         |
| Ndb_api_uk_op_count_session                | 0         |
| Ndb_api_table_scan_count_session           | 1         |
| Ndb_api_range_scan_count_session           | 0         |
| Ndb_api_pruned_scan_count_session          | 0         |
| Ndb_api_scan_batch_count_session           | 0         |
| Ndb_api_read_row_count_session             | 3         |
| Ndb_api_trans_local_read_row_count_session | 2         |
+--------------------------------------------+-----------+
18 rows in set (0.00 sec)

We can make a number of observations from these results:

  • Although we created t with no explicit primary key, 5 primary key operations were performed in doing so (the difference in the before and after values of Ndb_api_pk_op_count_session, or 6 minus 1). This reflects the creation of the hidden primary key that is a feature of all tables using the NDB storage engine.

  • By comparing successive values for Ndb_api_wait_nanos_count_session, we can see that the NDB API operations implementing the CREATE TABLE statement waited much longer for responses from the data nodes (706871709 - 820705 = 706051004 nanoseconds, or approximately 0.7 second) than those executed by the INSERT (707370418 - 706871709 = 498709 ns, or roughly 0.0005 second). The execution times reported for these statements in the mysql client correlate roughly with these figures.

    On platforms without sufficient (nanosecond) time resolution, small changes in the value of the WaitNanosCount NDB API counter due to SQL statements that execute very quickly may not always be visible in the values of Ndb_api_wait_nanos_count_session, Ndb_api_wait_nanos_count_slave, or Ndb_api_wait_nanos_count.

  • The INSERT statement incremented both the ReadRowCount and TransLocalReadRowCount NDB API statistics counters, as reflected by the increased values of Ndb_api_read_row_count_session and Ndb_api_trans_local_read_row_count_session.

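The before-and-after comparisons above can be scripted. The following is a minimal Python sketch (not part of the MySQL distribution) that computes per-counter deltas between two SHOW STATUS snapshots, here using abridged values taken from the CREATE TABLE example above:

```python
def counter_deltas(before, after):
    """Return the change in each Ndb_api_*_session counter between
    two SHOW STATUS snapshots, keeping only counters that moved."""
    return {name: int(after[name]) - int(before[name])
            for name in before
            if int(after[name]) != int(before[name])}

# Abridged values from the SHOW STATUS output before and after the
# CREATE TABLE statement shown above.
before = {
    "Ndb_api_pk_op_count_session": "1",
    "Ndb_api_trans_commit_count_session": "1",
    "Ndb_api_read_row_count_session": "1",
}
after = {
    "Ndb_api_pk_op_count_session": "6",
    "Ndb_api_trans_commit_count_session": "4",
    "Ndb_api_read_row_count_session": "2",
}
print(counter_deltas(before, after))
# {'Ndb_api_pk_op_count_session': 5, 'Ndb_api_trans_commit_count_session': 3, 'Ndb_api_read_row_count_session': 1}
```

In practice the two snapshots would be obtained by running SHOW STATUS LIKE 'ndb_api%session%' immediately before and after the statement of interest and reading the result rows into dictionaries.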

21.6 NDB Cluster Replication

NDB Cluster supports asynchronous replication, more usually referred to simply as replication. This section explains how to set up and manage a configuration in which one group of computers operating as an NDB Cluster replicates to a second computer or group of computers. We assume some familiarity on the part of the reader with standard MySQL replication as discussed elsewhere in this Manual. (See Chapter 16, Replication).

Note

NDB Cluster does not support replication using GTIDs; semisynchronous replication is also not supported by the NDB storage engine.

Normal (non-clustered) replication involves a master server and a slave server, the master being the source of the operations and data to be replicated and the slave being the recipient of these. In NDB Cluster, replication is conceptually very similar but can be more complex in practice, as it may be extended to cover a number of different configurations including replicating between two complete clusters. Although an NDB Cluster itself depends on the NDB storage engine for clustering functionality, it is not necessary to use NDB as the storage engine for the slave's copies of the replicated tables (see Replication from NDB to other storage engines). However, for maximum availability, it is possible (and preferable) to replicate from one NDB Cluster to another, and it is this scenario that we discuss, as shown in the following figure:

Figure 21.42 NDB Cluster-to-Cluster Replication Layout

Much of the content is described in the surrounding text. It visualizes how a master MySQL server is replicated as a slave. The slave differs in that it shows an I/O thread pointing to a Relay Binlog, and that Relay Binlog pointing to an SQL thread. In addition, while the binlog points to and from the NdbCluster Engine on the master, on the slave diagram it points directly to the slave's MySQL server.

In this scenario, the replication process is one in which successive states of a master cluster are logged and saved to a slave cluster. This process is accomplished by a special thread known as the NDB binary log injector thread, which runs on each MySQL server and produces a binary log (binlog). This thread ensures that all changes in the cluster producing the binary log—and not just those changes that are effected through the MySQL Server—are inserted into the binary log with the correct serialization order. We refer to the MySQL replication master and replication slave servers as replication servers or replication nodes, and the data flow or line of communication between them as a replication channel.

For information about performing point-in-time recovery with NDB Cluster and NDB Cluster Replication, see Section 21.6.9.2, “Point-In-Time Recovery Using NDB Cluster Replication”.

NDB API _slave status variables.  NDB API counters can provide enhanced monitoring capabilities on NDB Cluster replication slaves. These are implemented as NDB statistics _slave status variables, as seen in the output of SHOW STATUS, or in the results of queries against the SESSION_STATUS or GLOBAL_STATUS table in a mysql client session connected to a MySQL Server that is acting as a slave in NDB Cluster Replication. By comparing the values of these status variables before and after the execution of statements affecting replicated NDB tables, you can observe the corresponding actions taken on the NDB API level by the slave, which can be useful when monitoring or troubleshooting NDB Cluster Replication. Section 21.5.17, “NDB API Statistics Counters and Variables”, provides additional information.

Replication from NDB to non-NDB tables.  It is possible to replicate NDB tables from an NDB Cluster acting as the master to tables using other MySQL storage engines such as InnoDB or MyISAM on a slave mysqld. This is subject to a number of conditions; see Replication from NDB to other storage engines, and Replication from NDB to a nontransactional storage engine, for more information.

21.6.1 NDB Cluster Replication: Abbreviations and Symbols

Throughout this section, we use the following abbreviations or symbols for referring to the master and slave clusters, and to processes and commands run on the clusters or cluster nodes:

Table 21.406 Abbreviations used throughout this section referring to master and slave clusters, and to processes and commands run on nodes

Symbol or Abbreviation Description (Refers to...)
M The cluster serving as the (primary) replication master
S The cluster acting as the (primary) replication slave
shellM> Shell command to be issued on the master cluster
mysqlM> MySQL client command issued on a single MySQL server running as an SQL node on the master cluster
mysqlM*> MySQL client command to be issued on all SQL nodes participating in the replication master cluster
shellS> Shell command to be issued on the slave cluster
mysqlS> MySQL client command issued on a single MySQL server running as an SQL node on the slave cluster
mysqlS*> MySQL client command to be issued on all SQL nodes participating in the replication slave cluster
C Primary replication channel
C' Secondary replication channel
M' Secondary replication master
S' Secondary replication slave

21.6.2 General Requirements for NDB Cluster Replication

A replication channel requires two MySQL servers acting as replication servers (one each for the master and slave). For example, this means that in the case of a replication setup with two replication channels (to provide an extra channel for redundancy), there will be a total of four replication nodes, two per cluster.

Replication of an NDB Cluster as described in this section and those following is dependent on row-based replication. This means that the replication master MySQL server must be running with --binlog-format=ROW or --binlog-format=MIXED, as described in Section 21.6.6, “Starting NDB Cluster Replication (Single Replication Channel)”. For general information about row-based replication, see Section 16.2.1, “Replication Formats”.

Important

If you attempt to use NDB Cluster Replication with --binlog-format=STATEMENT, replication fails to work properly because the ndb_binlog_index table on the master and the epoch column of the ndb_apply_status table on the slave are not updated (see Section 21.6.4, “NDB Cluster Replication Schema and Tables”). Instead, only updates on the MySQL server acting as the replication master propagate to the slave, and no updates from any other SQL nodes on the master cluster are replicated.

The default value for the --binlog-format option in NDB 7.5 is MIXED.

Each MySQL server used for replication in either cluster must be uniquely identified among all the MySQL replication servers participating in either cluster (you cannot have replication servers on both the master and slave clusters sharing the same ID). This can be done by starting each SQL node using the --server-id=id option, where id is a unique integer. Although it is not strictly necessary, we will assume for purposes of this discussion that all NDB Cluster binaries are of the same release version.

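As a concrete illustration, the following are hypothetical my.cnf fragments (one file per server) for two SQL nodes acting as replication servers; the unique server IDs and the ROW binary log format reflect the requirements described in this section, while the names shown are placeholders only:

```ini
# Master cluster SQL node (replication master) -- hypothetical example
[mysqld]
ndbcluster
server-id=1
log-bin=master-bin
binlog-format=ROW
```

```ini
# Slave cluster SQL node (replication slave) -- hypothetical example
[mysqld]
ndbcluster
server-id=2
```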

It is generally true in MySQL Replication that both MySQL servers (mysqld processes) involved must be compatible with one another with respect to both the version of the replication protocol used and the SQL feature sets which they support (see Section 16.4.2, “Replication Compatibility Between MySQL Versions”). It is due to such differences between the binaries in the NDB Cluster and MySQL Server 5.7 distributions that NDB Cluster Replication has the additional requirement that both mysqld binaries come from an NDB Cluster distribution. The simplest and easiest way to assure that the mysqld servers are compatible is to use the same NDB Cluster distribution for all master and slave mysqld binaries.

We assume that the slave server or cluster is dedicated to replication of the master, and that no other data is being stored on it.

All NDB tables being replicated must be created using a MySQL server and client. Tables and other database objects created using the NDB API (with, for example, Dictionary::createTable()) are not visible to a MySQL server and so are not replicated. Updates by NDB API applications to existing tables that were created using a MySQL server can be replicated.

Note

It is possible to replicate an NDB Cluster using statement-based replication. However, in this case, the following restrictions apply:

  • All updates to data rows on the cluster acting as the master must be directed to a single MySQL server.

  • It is not possible to replicate a cluster using multiple simultaneous MySQL replication processes.

  • Only changes made at the SQL level are replicated.

These are in addition to the other limitations of statement-based replication as opposed to row-based replication; see Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”, for more specific information concerning the differences between the two replication formats.

21.6.3 Known Issues in NDB Cluster Replication

This section discusses known problems or issues when using replication with NDB Cluster 7.5.

Loss of master-slave connection.  A loss of connection can occur either between the replication master SQL node and the replication slave SQL node, or between the replication master SQL node and the data nodes in the master cluster. In the latter case, this can occur not only as a result of loss of physical connection (for example, a broken network cable), but also due to the overflow of data node event buffers; if the SQL node is too slow to respond, it may be dropped by the cluster (this is controllable to some degree by adjusting the MaxBufferedEpochs and TimeBetweenEpochs configuration parameters). If this occurs, it is entirely possible for new data to be inserted into the master cluster without being recorded in the replication master's binary log. For this reason, to guarantee high availability, it is extremely important to maintain a backup replication channel, to monitor the primary channel, and to fail over to the secondary replication channel when necessary to keep the slave cluster synchronized with the master. NDB Cluster is not designed to perform such monitoring on its own; for this, an external application is required.

The replication master issues a gap event when connecting or reconnecting to the master cluster. (A gap event is a type of incident event, which indicates an incident that occurs that affects the contents of the database but that cannot easily be represented as a set of changes. Examples of incidents are server crashes, database resynchronization, (some) software updates, and (some) hardware changes.) When the slave encounters a gap in the replication log, it stops with an error message. This message is available in the output of SHOW SLAVE STATUS, and indicates that the SQL thread has stopped due to an incident registered in the replication stream, and that manual intervention is required. See Section 21.6.8, “Implementing Failover with NDB Cluster Replication”, for more information about what to do in such circumstances.

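Since NDB Cluster does not perform such monitoring itself, an external application must watch SHOW SLAVE STATUS for this condition. The following Python sketch shows one way such an application might test a single SHOW SLAVE STATUS row, supplied as a dictionary; the column names used (Slave_SQL_Running, Last_SQL_Error) are standard SHOW SLAVE STATUS fields, but the incident wording matched here is illustrative only:

```python
def replication_stopped_by_incident(slave_status):
    """Check one row of SHOW SLAVE STATUS (as a column-name -> value
    dict) for an SQL thread stopped by an incident (gap) event."""
    if slave_status.get("Slave_SQL_Running", "Yes") == "Yes":
        return False  # SQL thread still applying events; no action needed
    error = slave_status.get("Last_SQL_Error", "").lower()
    # An incident (gap) event stops the SQL thread with an error
    # message describing the incident; match on that wording.
    return "incident" in error or "gap" in error

# Illustrative status row, as a monitoring script might assemble it
status = {
    "Slave_SQL_Running": "No",
    "Last_SQL_Error": "The incident LOST_EVENTS occurred on the master",
}
print(replication_stopped_by_incident(status))  # True
```

A real monitoring application would poll the slave mysqld periodically and, when this check returns true, trigger the failover procedure described in Section 21.6.8.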

Important

Because NDB Cluster is not designed on its own to monitor replication status or provide failover, if high availability is a requirement for the slave server or cluster, then you must set up multiple replication lines, monitor the master mysqld on the primary replication line, and be prepared to fail over to a secondary line if and as necessary. This must be done manually, or possibly by means of a third-party application. For information about implementing this type of setup, see Section 21.6.7, “Using Two Replication Channels for NDB Cluster Replication”, and Section 21.6.8, “Implementing Failover with NDB Cluster Replication”.

However, if you are replicating from a standalone MySQL server to an NDB Cluster, one channel is usually sufficient.

Circular replication.  NDB Cluster Replication supports circular replication, as shown in the next example. The replication setup involves three NDB Clusters numbered 1, 2, and 3, in which Cluster 1 acts as the replication master for Cluster 2, Cluster 2 acts as the master for Cluster 3, and Cluster 3 acts as the master for Cluster 1, thus completing the circle. Each NDB Cluster has two SQL nodes, with SQL nodes A and B belonging to Cluster 1, SQL nodes C and D belonging to Cluster 2, and SQL nodes E and F belonging to Cluster 3.

Circular replication using these clusters is supported as long as the following conditions are met:

  • The SQL nodes on all masters and slaves are the same.

  • All SQL nodes acting as replication masters and slaves are started with the log_slave_updates system variable enabled.

This type of circular replication setup is shown in the following diagram:

Figure 21.43 NDB Cluster Circular Replication With All Masters As Slaves

Content is described in the surrounding text.

In this scenario, SQL node A in Cluster 1 replicates to SQL node C in Cluster 2; SQL node C replicates to SQL node E in Cluster 3; SQL node E replicates to SQL node A. In other words, the replication line (indicated by the curved arrows in the diagram) directly connects all SQL nodes used as replication masters and slaves.

It should also be possible to set up circular replication in which not all master SQL nodes are also slaves, as shown here:

Figure 21.44 NDB Cluster Circular Replication Where Not All Masters Are Slaves

Some content is described in the surrounding text. It shows three clusters, each with two nodes. Arrows connect nodes from different clusters to represent that not all masters are slaves.

In this case, different SQL nodes in each cluster are used as replication masters and slaves. However, you must not start any of the SQL nodes with the log_slave_updates system variable enabled. This type of circular replication scheme for NDB Cluster, in which the line of replication (again indicated by the curved arrows in the diagram) is discontinuous, should be possible, but it should be noted that it has not yet been thoroughly tested and must therefore still be considered experimental.

Note

The NDB storage engine uses idempotent execution mode, which suppresses duplicate-key and other errors that otherwise break circular replication of NDB Cluster. This is equivalent to setting the global slave_exec_mode system variable to IDEMPOTENT, although this is not necessary in NDB Cluster replication, since NDB Cluster sets this variable automatically and ignores any attempts to set it explicitly.

NDB Cluster replication and primary keys.  In the event of a node failure, errors in replication of NDB tables without primary keys can still occur, due to the possibility of duplicate rows being inserted in such cases. For this reason, it is highly recommended that all NDB tables being replicated have primary keys.

NDB Cluster Replication and Unique Keys.  In older versions of NDB Cluster, operations that updated values of unique key columns of NDB tables could result in duplicate-key errors when replicated. This issue is solved for replication between NDB tables by deferring unique key checks until after all table row updates have been performed.

Deferring constraints in this way is currently supported only by NDB. Thus, updates of unique keys when replicating from NDB to a different storage engine such as MyISAM or InnoDB are still not supported.

The problem encountered when replicating without deferred checking of unique key updates can be illustrated using an NDB table such as t, which is created and populated on the master (and replicated to a slave that does not support deferred unique key updates) as shown here:

CREATE TABLE t (
    p INT PRIMARY KEY,
    c INT,
    UNIQUE KEY u (c)
)   ENGINE NDB;

INSERT INTO t
    VALUES (1,1), (2,2), (3,3), (4,4), (5,5);

The following UPDATE statement on t succeeded on the master, since the rows affected are processed in the order determined by the ORDER BY option, performed over the entire table:

UPDATE t SET c = c - 1 ORDER BY p;

However, the same statement failed with a duplicate key error or other constraint violation on the slave, because the ordering of the row updates was done for one partition at a time, rather than for the table as a whole.

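The effect of the update ordering can be modeled outside MySQL. The following Python sketch (an illustration, not NDB code) applies the same UPDATE with an immediate, non-deferred unique check after each row. A whole-table pass in ORDER BY p order succeeds, while a hypothetical per-partition ordering that happens to process rows p = 2 and p = 4 first fails, just as on the slave:

```python
class DuplicateKeyError(Exception):
    pass

def apply_update(rows, order, delta=-1):
    """Apply c = c + delta to each row in the given order, checking
    the unique key on c immediately after each row (no deferral).

    rows: dict mapping primary key p -> value of unique column c.
    """
    rows = dict(rows)
    for p in order:
        new_c = rows[p] + delta
        if any(c == new_c for q, c in rows.items() if q != p):
            raise DuplicateKeyError(
                "duplicate value %d for unique key u" % new_c)
        rows[p] = new_c
    return rows

rows = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}

# Whole-table ORDER BY p: each row vacates a value before the next
# row needs it, so the statement succeeds.
print(apply_update(rows, order=[1, 2, 3, 4, 5]))
# {1: 0, 2: 1, 3: 2, 4: 3, 5: 4}

# Hypothetical per-partition order processing p = 2 and p = 4 first:
# updating p = 2 to c = 1 collides with the not-yet-updated p = 1.
try:
    apply_update(rows, order=[2, 4, 1, 3, 5])
except DuplicateKeyError as exc:
    print(exc)  # duplicate value 1 for unique key u
```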

Note

Every NDB table is implicitly partitioned by key when it is created. See Section 22.2.5, “KEY Partitioning”, for more information.

GTIDs not supported.  Replication using global transaction IDs is not compatible with the NDB storage engine, and is not supported. Enabling GTIDs is likely to cause NDB Cluster Replication to fail.

Multithreaded slaves not supported.  NDB Cluster does not support multithreaded slaves, and setting related system variables such as slave_parallel_workers, slave_checkpoint_group, and slave_checkpoint_period (or the equivalent mysqld startup options) has no effect.

This is because the slave may not be able to separate transactions occurring in one database from those in another if they are written within the same epoch. In addition, every transaction handled by the NDB storage engine involves at least two databases—the target database and the mysql system database—due to the requirement for updating the mysql.ndb_apply_status table (see Section 21.6.4, “NDB Cluster Replication Schema and Tables”). This in turn breaks the requirement for multithreading that the transaction is specific to a given database.

Restarting with --initial.  Restarting the cluster with the --initial option causes the sequence of GCI and epoch numbers to start over from 0. (This is generally true of NDB Cluster and not limited to replication scenarios involving Cluster.) The MySQL servers involved in replication should in this case be restarted. After this, you should use the RESET MASTER and RESET SLAVE statements to clear the invalid ndb_binlog_index and ndb_apply_status tables, respectively.

Replication from NDB to other storage engines.  It is possible to replicate an NDB table on the master to a table using a different storage engine on the slave, taking into account the restrictions listed here:

  • Multi-master and circular replication are not supported (tables on both the master and the slave must use the NDB storage engine for this to work).

  • Using a storage engine which does not perform binary logging for slave tables requires special handling.

  • Use of a nontransactional storage engine for slave tables also requires special handling.

  • The master mysqld must be started with --ndb-log-update-as-write=0 or --ndb-log-update-as-write=OFF.

The next few paragraphs provide additional information about each of the issues just described.

Multiple masters not supported when replicating NDB to other storage engines.  For replication from NDB to a different storage engine, the relationship between the two databases must be a simple master-slave one. This means that circular or master-master replication is not supported between NDB Cluster and other storage engines.

In addition, it is not possible to configure more than one replication channel when replicating between NDB and a different storage engine. (However, an NDB Cluster database can simultaneously replicate to multiple slave NDB Cluster databases.) If the master uses NDB tables, it is still possible to have more than one MySQL Server maintain a binary log of all changes; however, for the slave to change masters (fail over), the new master-slave relationship must be explicitly defined on the slave.

Replicating NDB to a slave storage engine that does not perform binary logging.  If you attempt to replicate from an NDB Cluster to a slave that uses a storage engine that does not handle its own binary logging, the replication process aborts with the error Binary logging not possible ... Statement cannot be written atomically since more than one engine involved and at least one engine is self-logging (Error 1595). It is possible to work around this issue in one of the following ways:

  • Turn off binary logging on the slave.  This can be accomplished by setting sql_log_bin = 0.

  • Change the storage engine used for the mysql.ndb_apply_status table.  Causing this table to use an engine that does not handle its own binary logging can also eliminate the conflict. This can be done by issuing a statement such as ALTER TABLE mysql.ndb_apply_status ENGINE=MyISAM on the slave. It is safe to do this when using a non-NDB storage engine on the slave, since you do not then need to worry about keeping multiple slave SQL nodes synchronized.

  • Filter out changes to the mysql.ndb_apply_status table on the slave.  This can be done by starting the slave SQL node with --replicate-ignore-table=mysql.ndb_apply_status. If you need other tables to be ignored by replication as well, you might wish to use an appropriate --replicate-wild-ignore-table option instead.

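As an illustration of the last workaround, the filtering rule can be made persistent in the slave SQL node's my.cnf file (a sketch; combine it with one of the other measures as appropriate for your setup):

```
[mysqld]
# Do not apply changes to ndb_apply_status on this non-NDB slave
replicate-ignore-table=mysql.ndb_apply_status
```
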
Important

You should not disable replication or binary logging of mysql.ndb_apply_status or change the storage engine used for this table when replicating from one NDB Cluster to another. See Replication and binary log filtering rules with replication between NDB Clusters, for details.

Replication from NDB to a nontransactional storage engine.  When replicating from NDB to a nontransactional storage engine such as MyISAM, you may encounter unnecessary duplicate key errors when replicating INSERT ... ON DUPLICATE KEY UPDATE statements. You can suppress these by using --ndb-log-update-as-write=0, which forces updates to be logged as updates (rather than as writes).

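Equivalently, the option can be set in the master SQL node's my.cnf file (a sketch):

```
[mysqld]
# Log updates as updates, not as writes, so that INSERT ... ON DUPLICATE KEY
# UPDATE replays correctly on a nontransactional slave
ndb-log-update-as-write=0
```
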
Replication and binary log filtering rules with replication between NDB Clusters.  If you are using any of the options --replicate-do-*, --replicate-ignore-*, --binlog-do-db, or --binlog-ignore-db to filter databases or tables being replicated, care must be taken not to block replication or binary logging of the mysql.ndb_apply_status table, which is required for replication between NDB Clusters to operate properly. In particular, you must keep in mind the following:

  1. Using --replicate-do-db=db_name (and no other --replicate-do-* or --replicate-ignore-* options) means that only tables in database db_name are replicated. In this case, you should also use --replicate-do-db=mysql, --binlog-do-db=mysql, or --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is populated on slaves.

    Using --binlog-do-db=db_name (and no other --binlog-do-db options) means that changes only to tables in database db_name are written to the binary log. In this case, you should also use --replicate-do-db=mysql, --binlog-do-db=mysql, or --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is populated on slaves.

  2. Using --replicate-ignore-db=mysql means that no tables in the mysql database are replicated. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.

    Using --binlog-ignore-db=mysql means that no changes to tables in the mysql database are written to the binary log. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.

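For example, a slave SQL node configured to replicate only a database named db1 (a hypothetical name) would also need an exception for mysql.ndb_apply_status, along these lines:

```
[mysqld]
replicate-do-db=db1
# Required so that ndb_apply_status is still populated on the slave
replicate-do-table=mysql.ndb_apply_status
```
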
You should also remember that each replication rule requires the following:

  1. Its own --replicate-do-* or --replicate-ignore-* option, and that multiple rules cannot be expressed in a single replication filtering option. For information about these rules, see Section 16.1.6, “Replication and Binary Logging Options and Variables”.

  2. Its own --binlog-do-db or --binlog-ignore-db option, and that multiple rules cannot be expressed in a single binary log filtering option. For information about these rules, see Section 5.4.4, “The Binary Log”.

If you are replicating an NDB Cluster to a slave that uses a storage engine other than NDB, the considerations just given previously may not apply, as discussed elsewhere in this section.

NDB Cluster Replication and IPv6.  Currently, the NDB API and MGM API do not support IPv6. However, MySQL Servers—including those acting as SQL nodes in an NDB Cluster—can use IPv6 to contact other MySQL Servers. This means that you can replicate between NDB Clusters using IPv6 to connect the master and slave SQL nodes as shown by the dotted arrow in the following diagram:

Figure 21.45 Replication Between SQL Nodes Connected Using IPv6

Most content is described in the surrounding text. The dotted line representing a MySQL-to-MySQL IPv6 connection is between two nodes, one each from master and slave clusters. All connections within the cluster, such as ndbd-to-ndbd, node to ndb_mgmd, are connected with solid lines to indicate IPv4 connections only.

However, all connections originating within the NDB Cluster—represented in the preceding diagram by solid arrows—must use IPv4. In other words, all NDB Cluster data nodes, management servers, and management clients must be accessible from one another using IPv4. In addition, SQL nodes must use IPv4 to communicate with the cluster.

Since there is currently no support in the NDB and MGM APIs for IPv6, any applications written using these APIs must also make all connections using IPv4.

Attribute promotion and demotion.  NDB Cluster Replication includes support for attribute promotion and demotion. The implementation of the latter distinguishes between lossy and non-lossy type conversions, and their use on the slave can be controlled by setting the slave_type_conversions global server system variable.

For more information about attribute promotion and demotion in NDB Cluster, see Row-based replication: attribute promotion and demotion.

NDB, unlike InnoDB or MyISAM, does not write changes to virtual columns to the binary log; however, this has no detrimental effects on NDB Cluster Replication or replication between NDB and other storage engines. Changes to stored generated columns are logged.

21.6.4 NDB Cluster Replication Schema and Tables

Replication in NDB Cluster makes use of a number of dedicated tables in the mysql database on each MySQL Server instance acting as an SQL node in both the cluster being replicated and the replication slave (whether the slave is a single server or a cluster). These tables are created during the MySQL installation process, and include a table for storing the binary log's indexing data. Since the ndb_binlog_index table is local to each MySQL server and does not participate in clustering, it uses the InnoDB storage engine. This means that it must be created separately on each mysqld participating in the master cluster. (However, the binary log itself contains updates from all MySQL servers in the cluster to be replicated.) This table is defined as follows:

CREATE TABLE `ndb_binlog_index` (
    `Position` BIGINT(20) UNSIGNED NOT NULL,
    `File` VARCHAR(255) NOT NULL,
    `epoch` BIGINT(20) UNSIGNED NOT NULL,
    `inserts` INT(10) UNSIGNED NOT NULL,
    `updates` INT(10) UNSIGNED NOT NULL,
    `deletes` INT(10) UNSIGNED NOT NULL,
    `schemaops` INT(10) UNSIGNED NOT NULL,
    `orig_server_id` INT(10) UNSIGNED NOT NULL,
    `orig_epoch` BIGINT(20) UNSIGNED NOT NULL,
    `gci` INT(10) UNSIGNED NOT NULL,
    `next_position` BIGINT(20) UNSIGNED NOT NULL,
    `next_file` VARCHAR(255) NOT NULL,
    PRIMARY KEY (`epoch`,`orig_server_id`,`orig_epoch`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Note

Prior to NDB 7.5.2, this table always used the MyISAM storage engine. If you are upgrading from an earlier release, you can use mysql_upgrade with the --force and --upgrade-system-tables options to cause it to execute an ALTER TABLE ... ENGINE=INNODB statement on this table. Use of the MyISAM storage engine for this table continues to be supported in NDB 7.5.2 and later for backward compatibility.

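Such an upgrade invocation might look like this (a sketch; connection options depend on your installation):

```
shell> mysql_upgrade --force --upgrade-system-tables
```
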
ndb_binlog_index may require additional disk space after being converted to InnoDB. If this becomes an issue, you may be able to conserve space by using an InnoDB tablespace for this table, changing its ROW_FORMAT to COMPRESSED, or both. For more information, see Section 13.1.19, “CREATE TABLESPACE Syntax”, and Section 13.1.18, “CREATE TABLE Syntax”, as well as Section 14.6.3, “Tablespaces”.

The size of this table is dependent on the number of epochs per binary log file and the number of binary log files. The number of epochs per binary log file normally depends on the amount of binary log generated per epoch and the size of the binary log file, with smaller epochs resulting in more epochs per file. You should be aware that empty epochs produce inserts to the ndb_binlog_index table, even when the --ndb-log-empty-epochs option is OFF, meaning that the number of entries per file depends on the length of time that the file is in use; that is,

[number of epochs per file] = [time spent per file] / TimeBetweenEpochs
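
For example, assuming the default TimeBetweenEpochs of 100 milliseconds and a binary log file that remains in use for one hour, the file accumulates entries for roughly

```
[number of epochs per file] = 3600000 ms / 100 ms = 36000
```

epochs, and so can hold on the order of 36000 ndb_binlog_index rows.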

A busy NDB Cluster writes to the binary log regularly and presumably rotates binary log files more quickly than a quiet one. This means that a quiet NDB Cluster with --ndb-log-empty-epochs=ON can actually have a much higher number of ndb_binlog_index rows per file than one with a great deal of activity.

When mysqld is started with the --ndb-log-orig option, the orig_server_id and orig_epoch columns store, respectively, the ID of the server on which the event originated and the epoch in which the event took place on the originating server, which is useful in NDB Cluster replication setups employing multiple masters. The SELECT statement used to find the closest binary log position to the highest applied epoch on the slave in a multi-master setup (see Section 21.6.10, “NDB Cluster Replication: Multi-Master and Circular Replication”) employs these two columns, which are not indexed. This can lead to performance issues when trying to fail over, since the query must perform a table scan, especially when the master has been running with --ndb-log-empty-epochs=ON. You can improve multi-master failover times by adding an index to these columns, as shown here:

ALTER TABLE mysql.ndb_binlog_index
    ADD INDEX orig_lookup USING BTREE (orig_server_id, orig_epoch);

Adding this index provides no benefit when replicating from a single master to a single slave, since the query used to get the binary log position in such cases makes no use of orig_server_id or orig_epoch.

See Section 21.6.8, “Implementing Failover with NDB Cluster Replication”, for more information about using the next_position and next_file columns.

The following figure shows the relationship of the NDB Cluster replication master server, its binary log injector thread, and the mysql.ndb_binlog_index table.

Figure 21.46 The Replication Master Cluster

Most concepts are described in the surrounding text. This complex image has three main areas. The top area is divided into three sections: MySQL Server (mysqld), NdbCluster table handler, and mutex. A connection thread connects these three areas, and receiver and injector threads connect NdbCluster table handler and mutex. The bottom area lists four data nodes (ndbd). They all have events arrows pointing to the receiver thread, and the receiver thread also points to the connection and injector threads. One node sends and receives to the mutex area. The injector thread points to a binlog and also the third area in this image: the ndb_binlog_index table, a table described in the surrounding text.

An additional table, named ndb_apply_status, is used to keep a record of the operations that have been replicated from the master to the slave. Unlike the case with ndb_binlog_index, the data in this table is not specific to any one SQL node in the (slave) cluster, and so ndb_apply_status can use the NDBCLUSTER storage engine, as shown here:

CREATE TABLE `ndb_apply_status` (
    `server_id`   INT(10) UNSIGNED NOT NULL,
    `epoch`       BIGINT(20) UNSIGNED NOT NULL,
    `log_name`    VARCHAR(255) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
    `start_pos`   BIGINT(20) UNSIGNED NOT NULL,
    `end_pos`     BIGINT(20) UNSIGNED NOT NULL,
    PRIMARY KEY (`server_id`) USING HASH
) ENGINE=NDBCLUSTER   DEFAULT CHARSET=latin1;

The ndb_apply_status table is populated only on slaves, which means that, on the master, this table never contains any rows; thus, there is no need to allow for DataMemory or IndexMemory to be allotted to ndb_apply_status there.

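On a running slave, the most recently applied epoch for each originating server can be inspected with a query such as the following (illustrative; the upper 32 bits of the epoch value correspond to the GCI):

```sql
SELECT server_id, epoch >> 32 AS gci, log_name, end_pos
    FROM mysql.ndb_apply_status;
```
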
Because this table is populated from data originating on the master, it should be allowed to replicate; any replication filtering or binary log filtering rules that inadvertently prevent the slave from updating ndb_apply_status or the master from writing into the binary log may prevent replication between clusters from operating properly. For more information about potential problems arising from such filtering rules, see Replication and binary log filtering rules with replication between NDB Clusters.

The ndb_binlog_index and ndb_apply_status tables are created in the mysql database because they should not be explicitly replicated by the user. User intervention is normally not required to create or maintain either of these tables, since both ndb_binlog_index and ndb_apply_status are maintained by the NDB binary log (binlog) injector thread. This keeps the master mysqld process updated to changes performed by the NDB storage engine. The NDB binlog injector thread receives events directly from the NDB storage engine. The NDB injector is responsible for capturing all the data events within the cluster, and ensures that all events which change, insert, or delete data are recorded in the ndb_binlog_index table. The slave I/O thread transfers the events from the master's binary log to the slave's relay log.

However, it is advisable to check for the existence and integrity of these tables as an initial step in preparing an NDB Cluster for replication. It is possible to view event data recorded in the binary log by querying the mysql.ndb_binlog_index table directly on the master. This can also be accomplished using the SHOW BINLOG EVENTS statement on either the replication master or slave MySQL servers. (See Section 13.7.5.2, “SHOW BINLOG EVENTS Syntax”.)

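For example, the most recent entries recorded by the injector thread on the master can be viewed with a query like this one (illustrative):

```sql
SELECT File, Position, epoch, inserts, updates, deletes
    FROM mysql.ndb_binlog_index
    ORDER BY epoch DESC
    LIMIT 5;
```
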
You can also obtain useful information from the output of SHOW ENGINE NDB STATUS.

Note

When performing schema changes on NDB tables, applications should wait until the ALTER TABLE statement has returned in the MySQL client connection that issued the statement before attempting to use the updated definition of the table.

If the ndb_apply_status table does not exist on the slave, ndb_restore re-creates it.

Conflict resolution for NDB Cluster Replication requires the presence of an additional mysql.ndb_replication table. Currently, this table must be created manually. For information about how to do this, see Section 21.6.11, “NDB Cluster Replication Conflict Resolution”.

21.6.5 Preparing the NDB Cluster for Replication

Preparing the NDB Cluster for replication consists of the following steps:

  1. Check all MySQL servers for version compatibility (see Section 21.6.2, “General Requirements for NDB Cluster Replication”).

  2. Create a slave account on the master Cluster with the appropriate privileges:

    mysqlM> GRANT REPLICATION SLAVE
         -> ON *.* TO 'slave_user'@'slave_host'
         -> IDENTIFIED BY 'slave_password';
    

    In the previous statement, slave_user is the slave account user name, slave_host is the host name or IP address of the replication slave, and slave_password is the password to assign to this account.

    For example, to create a slave user account with the name myslave, logging in from the host named rep-slave, and using the password 53cr37, use the following GRANT statement:

    mysqlM> GRANT REPLICATION SLAVE
         -> ON *.* TO 'myslave'@'rep-slave'
         -> IDENTIFIED BY '53cr37';
    

    For security reasons, it is preferable to use a unique user account—not employed for any other purpose—for the replication slave account.

  3. Configure the slave to use the master. Using the MySQL Monitor, this can be accomplished with the CHANGE MASTER TO statement:

    mysqlS> CHANGE MASTER TO
         -> MASTER_HOST='master_host',
         -> MASTER_PORT=master_port,
         -> MASTER_USER='slave_user',
         -> MASTER_PASSWORD='slave_password';
    

    In the previous statement, master_host is the host name or IP address of the replication master, master_port is the port for the slave to use for connecting to the master, slave_user is the user name set up for the slave on the master, and slave_password is the password set for that user account in the previous step.

    For example, to tell the slave to replicate from the MySQL server whose host name is rep-master, using the replication slave account created in the previous step, use the following statement:

    mysqlS> CHANGE MASTER TO
         -> MASTER_HOST='rep-master',
         -> MASTER_PORT=3306,
         -> MASTER_USER='myslave',
         -> MASTER_PASSWORD='53cr37';
    

    For a complete list of options that can be used with this statement, see Section 13.4.2.1, “CHANGE MASTER TO Syntax”.

    To provide replication backup capability, you also need to add an --ndb-connectstring option to the slave's my.cnf file prior to starting the replication process. See Section 21.6.9, “NDB Cluster Backups With NDB Cluster Replication”, for details.

    For additional options that can be set in my.cnf for replication slaves, see Section 16.1.6, “Replication and Binary Logging Options and Variables”.

  4. If the master cluster is already in use, you can create a backup of the master and load this onto the slave to cut down on the amount of time required for the slave to synchronize itself with the master. If the slave is also running NDB Cluster, this can be accomplished using the backup and restore procedure described in Section 21.6.9, “NDB Cluster Backups With NDB Cluster Replication”.

    ndb-connectstring=management_host[:port]
    

    In the event that you are not using NDB Cluster on the replication slave, you can create a backup with this command on the replication master:

    shellM> mysqldump --master-data=1
    

    Then import the resulting data dump onto the slave by copying the dump file over to the slave. After this, you can use the mysql client to import the data from the dumpfile into the slave database as shown here, where dump_file is the name of the file that was generated using mysqldump on the master, and db_name is the name of the database to be replicated:

    shellS> mysql -u root -p db_name < dump_file
    

    For a complete list of options to use with mysqldump, see Section 4.5.4, “mysqldump — A Database Backup Program”.

    Note

    If you copy the data to the slave in this fashion, you should make sure that the slave is started with the --skip-slave-start option on the command line, or else include skip-slave-start in the slave's my.cnf file to keep it from trying to connect to the master to begin replicating before all the data has been loaded. Once the data loading has completed, follow the additional steps outlined in the next two sections.

  5. Ensure that each MySQL server acting as a replication master is configured with a unique server ID, and with binary logging enabled, using the row format. (See Section 16.2.1, “Replication Formats”.) These options can be set either in the master server's my.cnf file, or on the command line when starting the master mysqld process. See Section 21.6.6, “Starting NDB Cluster Replication (Single Replication Channel)”, for information regarding the latter option.

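A minimal my.cnf fragment for a master SQL node satisfying these requirements might look like this (the server ID shown is illustrative):

```
[mysqld]
ndbcluster
server-id=1
log-bin
binlog-format=ROW
```
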
21.6.6 Starting NDB Cluster Replication (Single Replication Channel)

This section outlines the procedure for starting NDB Cluster replication using a single replication channel.

  1. Start the MySQL replication master server by issuing this command:

    shellM> mysqld --ndbcluster --server-id=id \
            --log-bin &
    

    In the previous statement, id is this server's unique ID (see Section 21.6.2, “General Requirements for NDB Cluster Replication”). This starts the server's mysqld process with binary logging enabled using the proper logging format.

    Note

    You can also start the master with --binlog-format=MIXED, in which case row-based replication is used automatically when replicating between clusters. STATEMENT based binary logging is not supported for NDB Cluster Replication (see Section 21.6.2, “General Requirements for NDB Cluster Replication”).

  2. Start the MySQL replication slave server as shown here:

    shellS> mysqld --ndbcluster --server-id=id &
    

    In the command just shown, id is the slave server's unique ID. It is not necessary to enable logging on the replication slave.

    Note

    You should use the --skip-slave-start option with this command or else you should include skip-slave-start in the slave server's my.cnf file, unless you want replication to begin immediately. With the use of this option, the start of replication is delayed until the appropriate START SLAVE statement has been issued, as explained in Step 4 below.

  3. It is necessary to synchronize the slave server with the master server's replication binary log. If binary logging has not previously been running on the master, run the following statement on the slave:

    mysqlS> CHANGE MASTER TO
         -> MASTER_LOG_FILE='',
         -> MASTER_LOG_POS=4;
    

    This instructs the slave to begin reading the master's binary log from the log's starting point. Otherwise—that is, if you are loading data from the master using a backup—see Section 21.6.8, “Implementing Failover with NDB Cluster Replication”, for information on how to obtain the correct values to use for MASTER_LOG_FILE and MASTER_LOG_POS in such cases.

  4. Finally, you must instruct the slave to begin applying replication by issuing this command from the mysql client on the replication slave:

    mysqlS> START SLAVE;
    

    This also initiates the transmission of replication data from the master to the slave.

It is also possible to use two replication channels, in a manner similar to the procedure described in the next section; the differences between this and using a single replication channel are covered in Section 21.6.7, “Using Two Replication Channels for NDB Cluster Replication”.

It is also possible to improve cluster replication performance by enabling batched updates. This can be accomplished by setting the slave_allow_batching system variable on the slave mysqld processes. Normally, updates are applied as soon as they are received. However, the use of batching causes updates to be applied in 32 KB batches, which can result in higher throughput and less CPU usage, particularly where individual updates are relatively small.

Note

Slave batching works on a per-epoch basis; updates belonging to more than one transaction can be sent as part of the same batch.

All outstanding updates are applied when the end of an epoch is reached, even if the updates total less than 32 KB.

Batching can be turned on and off at runtime. To activate it at runtime, you can use either of these two statements:

SET GLOBAL slave_allow_batching = 1;
SET GLOBAL slave_allow_batching = ON;

If a particular batch causes problems (such as a statement whose effects do not appear to be replicated correctly), slave batching can be deactivated using either of the following statements:

SET GLOBAL slave_allow_batching = 0;
SET GLOBAL slave_allow_batching = OFF;

You can check whether slave batching is currently being used by means of an appropriate SHOW VARIABLES statement, like this one:

mysql> SHOW VARIABLES LIKE 'slave%';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| slave_allow_batching      | ON    |
| slave_compressed_protocol | OFF   |
| slave_load_tmpdir         | /tmp  |
| slave_net_timeout         | 3600  |
| slave_skip_errors         | OFF   |
| slave_transaction_retries | 10    |
+---------------------------+-------+
6 rows in set (0.00 sec)

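The flush rule described above can be modeled in a few lines. This is a hypothetical illustration of the batching behavior (not NDB internals): updates accumulate until the pending batch reaches 32 KB, and any partial batch is flushed when an epoch ends.

```python
# Illustrative model of slave_allow_batching (hypothetical; not NDB source):
# updates accumulate until the batch reaches 32 KB or the epoch ends.

BATCH_LIMIT = 32 * 1024  # 32 KB

def apply_updates(epochs):
    """epochs: list of epochs, each a list of update sizes in bytes.
    Returns the sizes of the batches as they are flushed."""
    batches = []
    pending = 0
    for epoch in epochs:
        for size in epoch:
            pending += size
            if pending >= BATCH_LIMIT:   # batch full: apply it
                batches.append(pending)
                pending = 0
        if pending:                      # end of epoch: flush partial batch
            batches.append(pending)
            pending = 0
    return batches

# The first epoch fills one batch and leaves a remainder; the second epoch's
# single small update is still flushed at the epoch boundary.
print(apply_updates([[20000, 20000, 5000], [1000]]))  # [40000, 5000, 1000]
```

This mirrors the note above: a batch can span transactions within an epoch, but never an epoch boundary.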
21.6.7 Using Two Replication Channels for NDB Cluster Replication

In a more complete example scenario, we envision two replication channels to provide redundancy and thereby guard against possible failure of a single replication channel. This requires a total of four replication servers, two masters for the master cluster and two slave servers for the slave cluster. For purposes of the discussion that follows, we assume that unique identifiers are assigned as shown here:

Table 21.407 NDB Cluster replication servers described in the text

Server ID   Description
1           Master - primary replication channel (M)
2           Master - secondary replication channel (M')
3           Slave - primary replication channel (S)
4           Slave - secondary replication channel (S')

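These server IDs would typically be fixed in each server's configuration file rather than on the command line. A hypothetical my.cnf fragment for the primary master (M) is shown here; the other three servers differ only in their server-id values:

```ini
# my.cnf on the primary replication master (M), per Table 21.407.
# Illustrative only; the slaves (server IDs 3 and 4) would omit log-bin
# and add skip-slave-start, matching the command lines shown below.
[mysqld]
ndbcluster
server-id=1
log-bin
```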
Setting up replication with two channels is not radically different from setting up a single replication channel. First, the mysqld processes for the primary and secondary replication masters must be started, followed by those for the primary and secondary slaves. Then the replication processes may be initiated by issuing the START SLAVE statement on each of the slaves. The commands and the order in which they need to be issued are shown here:

  1. Start the primary replication master:

    shellM> mysqld --ndbcluster --server-id=1 \
                   --log-bin &
    
  2. Start the secondary replication master:

    shellM'> mysqld --ndbcluster --server-id=2 \
                    --log-bin &
    
  3. Start the primary replication slave server:

    shellS> mysqld --ndbcluster --server-id=3 \
                   --skip-slave-start &
    
  4. Start the secondary replication slave:

    shellS'> mysqld --ndbcluster --server-id=4 \
                    --skip-slave-start &
    
  5. Finally, initiate replication on the primary channel by executing the START SLAVE statement on the primary slave as shown here:

    mysqlS> START SLAVE;
    
    Warning

    Only the primary channel is to be started at this point. The secondary replication channel is to be started only in the event that the primary replication channel fails, as described in Section 21.6.8, “Implementing Failover with NDB Cluster Replication”. Running multiple replication channels simultaneously can result in unwanted duplicate records being created on the replication slaves.

As mentioned previously, it is not necessary to enable binary logging on replication slaves.

21.6.8 Implementing Failover with NDB Cluster Replication

In the event that the primary Cluster replication process fails, it is possible to switch over to the secondary replication channel. The following procedure describes the steps required to accomplish this.

  1. Obtain the time of the most recent global checkpoint (GCP). That is, you need to determine the most recent epoch from the ndb_apply_status table on the slave cluster, which can be found using the following query:

    mysqlS'> SELECT @latest:=MAX(epoch)
          ->        FROM mysql.ndb_apply_status;
    

    In a circular replication topology, with a master and a slave running on each host, when you are using ndb_log_apply_status=1, NDB Cluster epochs are written in the slave binary logs. This means that the ndb_apply_status table contains information for the slave on this host as well as for any other host which acts as a slave of the master running on this host.

    In this case, you need to determine the latest epoch on this slave to the exclusion of any epochs from any other slaves in this slave's binary log that were not listed in the IGNORE_SERVER_IDS option of the CHANGE MASTER TO statement used to set up this slave. The reason for excluding such epochs is that rows in the mysql.ndb_apply_status table whose server IDs match the IGNORE_SERVER_IDS list of the CHANGE MASTER TO statement used to prepare this slave's master are also considered to be from local servers, in addition to those having the slave's own server ID. You can retrieve this list as Replicate_Ignore_Server_Ids from the output of SHOW SLAVE STATUS. We assume that you have obtained this list and are substituting it for ignore_server_ids in the query shown here, which, like the previous version of the query, selects the greatest epoch into a variable named @latest:

    mysqlS'> SELECT @latest:=MAX(epoch)
          ->        FROM mysql.ndb_apply_status
          ->        WHERE server_id NOT IN (ignore_server_ids);
    

    In some cases, it may be simpler or more efficient (or both) to use a list of the server IDs to be included and server_id IN server_id_list in the WHERE condition of the preceding query.

  2. Using the information obtained from the query shown in Step 1, obtain the corresponding records from the ndb_binlog_index table on the master cluster.

    You can use the following query to obtain the needed records from the master's ndb_binlog_index table:

    mysqlM'> SELECT
          ->     @file:=SUBSTRING_INDEX(next_file, '/', -1),
          ->     @pos:=next_position
          -> FROM mysql.ndb_binlog_index
          -> WHERE epoch >= @latest
          -> ORDER BY epoch ASC LIMIT 1;
    

    These are the records saved on the master since the failure of the primary replication channel. We have employed a user variable @latest here to represent the value obtained in Step 1. Of course, it is not possible for one mysqld instance to access user variables set on another server instance directly. These values must be plugged in to the second query manually or in application code.

    Important

    You must ensure that the slave mysqld is started with --slave-skip-errors=ddl_exist_errors before executing START SLAVE. Otherwise, replication may stop with duplicate DDL errors.

  3. Now it is possible to synchronize the secondary channel by running the following query on the secondary slave server:

    mysqlS'> CHANGE MASTER TO
          ->     MASTER_LOG_FILE='@file',
          ->     MASTER_LOG_POS=@pos;
    

    Again we have employed user variables (in this case @file and @pos) to represent the values obtained in Step 2 and applied in Step 3; in practice these values must be inserted manually or using application code that can access both of the servers involved.

    Note

    @file is a string value such as '/var/log/mysql/replication-master-bin.00001', and so must be quoted when used in SQL or application code. However, the value represented by @pos must not be quoted. Although MySQL normally attempts to convert strings to numbers, this case is an exception.

  4. You can now initiate replication on the secondary channel by issuing the appropriate command on the secondary slave mysqld:

    mysqlS'> START SLAVE;
    

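Steps 1 through 3 above reduce to a small computation that failover code has to perform: find the newest locally applied epoch on the slave (excluding epochs from ignored server IDs), then look up the first binary log file and position at or past that epoch on the master. A minimal sketch with both tables as in-memory rows (all names and values here are invented for illustration):

```python
# Sketch of the failover computation from Steps 1-3 (illustrative only;
# in practice these rows come from mysql.ndb_apply_status on the slave
# and mysql.ndb_binlog_index on the master).

def latest_epoch(apply_status_rows, ignore_server_ids=()):
    """SELECT MAX(epoch) ... WHERE server_id NOT IN (ignore_server_ids)."""
    epochs = [e for (server_id, e) in apply_status_rows
              if server_id not in ignore_server_ids]
    return max(epochs) if epochs else None

def binlog_start(binlog_index_rows, latest):
    """First (next_file, next_position) with epoch >= latest, ascending."""
    candidates = sorted(r for r in binlog_index_rows if r[0] >= latest)
    if not candidates:
        return None
    epoch, next_file, next_position = candidates[0]
    # SUBSTRING_INDEX(next_file, '/', -1): keep only the file name
    return next_file.rsplit('/', 1)[-1], next_position

apply_status = [(3, 84), (3, 87), (7, 91)]            # (server_id, epoch)
binlog_index = [(80, '/var/log/mysql/binlog.000001', 4),
                (87, '/var/log/mysql/binlog.000002', 1045),
                (92, '/var/log/mysql/binlog.000003', 4)]

latest = latest_epoch(apply_status, ignore_server_ids={7})
print(latest)                                # 87
print(binlog_start(binlog_index, latest))    # ('binlog.000002', 1045)
```

The file and position returned would then be supplied to CHANGE MASTER TO, manually or from application code, since user variables cannot be shared between the two mysqld instances.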
Once the secondary replication channel is active, you can investigate the failure of the primary and effect repairs. The precise actions required to do this will depend upon the reasons for which the primary channel failed.

Warning

The secondary replication channel is to be started only if and when the primary replication channel has failed. Running multiple replication channels simultaneously can result in unwanted duplicate records being created on the replication slaves.

If the failure is limited to a single server, it should (in theory) be possible to replicate from M to S', or from M' to S; however, this has not yet been tested.

21.6.9 NDB Cluster Backups With NDB Cluster Replication

This section discusses making backups and restoring from them using NDB Cluster replication. We assume that the replication servers have already been configured as covered previously (see Section 21.6.5, “Preparing the NDB Cluster for Replication”, and the sections immediately following). This having been done, the procedure for making a backup and then restoring from it is as follows:

  1. There are two different methods by which the backup may be started.

    • Method A.  This method requires that the cluster backup process was previously enabled on the master server, prior to starting the replication process. This can be done by including the following line in a [mysql_cluster] section in the my.cnf file, where management_host is the IP address or host name of the NDB management server for the master cluster, and port is the management server's port number:

      ndb-connectstring=management_host[:port]
      
      Note

      The port number needs to be specified only if the default port (1186) is not being used. See Section 21.2.5, “Initial Configuration of NDB Cluster”, for more information about ports and port allocation in NDB Cluster.

      In this case, the backup can be started by executing this statement on the replication master:

      shellM> ndb_mgm -e "START BACKUP"
      
    • Method B.  If the my.cnf file does not specify where to find the management host, you can start the backup process by passing this information to the NDB management client as part of the START BACKUP command. This can be done as shown here, where management_host and port are the host name and port number of the management server:

      shellM> ndb_mgm management_host:port -e "START BACKUP"
      

      In our scenario as outlined earlier (see Section 21.6.5, “Preparing the NDB Cluster for Replication”), this would be executed as follows:

      shellM> ndb_mgm rep-master:1186 -e "START BACKUP"
      
  2. Copy the cluster backup files to the slave that is being brought on line. Each system running an ndbd process for the master cluster will have cluster backup files located on it, and all of these files must be copied to the slave to ensure a successful restore. The backup files can be copied into any directory on the computer where the slave management host resides, so long as the MySQL and NDB binaries have read permissions in that directory. In this case, we will assume that these files have been copied into the directory /var/BACKUPS/BACKUP-1.

    It is not necessary that the slave cluster have the same number of ndbd processes (data nodes) as the master; however, it is highly recommended this number be the same. It is necessary that the slave be started with the --skip-slave-start option, to prevent premature startup of the replication process.

  3. Create any databases on the slave cluster that are present on the master cluster that are to be replicated to the slave.

    Important

    A CREATE DATABASE (or CREATE SCHEMA) statement corresponding to each database to be replicated must be executed on each SQL node in the slave cluster.

  4. Reset the slave cluster using this statement in the MySQL Monitor:

    mysqlS> RESET SLAVE;
    
  5. You can now start the cluster restoration process on the replication slave using the ndb_restore command for each backup file in turn. For the first of these, it is necessary to include the -m option to restore the cluster metadata:

    shellS> ndb_restore -c slave_host:port -n node-id \
            -b backup-id -m -r dir
    

    dir is the path to the directory where the backup files have been placed on the replication slave. For the ndb_restore commands corresponding to the remaining backup files, the -m option should not be used.

    For restoring from a master cluster with four data nodes (as shown in the figure in Section 21.6, “NDB Cluster Replication”) where the backup files have been copied to the directory /var/BACKUPS/BACKUP-1, the proper sequence of commands to be executed on the slave might look like this:

    shellS> ndb_restore -c rep-slave:1186 -n 2 -b 1 -m \
            -r ./var/BACKUPS/BACKUP-1
    shellS> ndb_restore -c rep-slave:1186 -n 3 -b 1 \
            -r ./var/BACKUPS/BACKUP-1
    shellS> ndb_restore -c rep-slave:1186 -n 4 -b 1 \
            -r ./var/BACKUPS/BACKUP-1
    shellS> ndb_restore -c rep-slave:1186 -n 5 -b 1 -e \
            -r ./var/BACKUPS/BACKUP-1
    
    Important

    The -e (or --restore-epoch) option in the final invocation of ndb_restore in this example is required in order that the epoch is written to the slave mysql.ndb_apply_status. Without this information, the slave will not be able to synchronize properly with the master. (See Section 21.4.24, “ndb_restore — Restore an NDB Cluster Backup”.)

  6. Now you need to obtain the most recent epoch from the ndb_apply_status table on the slave (as discussed in Section 21.6.8, “Implementing Failover with NDB Cluster Replication”):

    mysqlS> SELECT @latest:=MAX(epoch)
            FROM mysql.ndb_apply_status;
    
  7. Using @latest as the epoch value obtained in the previous step, you can obtain the correct starting position @pos in the correct binary log file @file from the master's mysql.ndb_binlog_index table using the query shown here:

    mysqlM> SELECT
         ->     @file:=SUBSTRING_INDEX(File, '/', -1),
         ->     @pos:=Position
         -> FROM mysql.ndb_binlog_index
         -> WHERE epoch >= @latest
         -> ORDER BY epoch ASC LIMIT 1;
    

    In the event that there is currently no replication traffic, you can get this information by running SHOW MASTER STATUS on the master and using the Position value for the file whose name has the greatest suffix among the files shown in the File column. However, in this case, you must determine this value yourself and supply it in the next step manually or by parsing the output with a script.

  8. Using the values obtained in the previous step, you can now issue the appropriate CHANGE MASTER TO statement in the slave's mysql client:

    mysqlS> CHANGE MASTER TO
         ->     MASTER_LOG_FILE='@file',
         ->     MASTER_LOG_POS=@pos;
    
  9. Now that the slave knows from what point in which binary log file to start reading data from the master, you can cause the slave to begin replicating with this standard MySQL statement:

    mysqlS> START SLAVE;
    

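The ndb_restore sequence in Step 5 follows a fixed pattern: one invocation per data node, with -m added only to the first (metadata) and -e only to the last (epoch). A short sketch that generates those command lines for the four-node example (illustrative; it only builds strings):

```python
# Generate the ndb_restore invocations from Step 5: -m only on the first
# data node, -e only on the last. Host, node IDs, and paths are taken
# from the four-data-node example above; purely illustrative.

def restore_commands(mgm_host, node_ids, backup_id, backup_dir):
    cmds = []
    for i, node in enumerate(node_ids):
        flags = ''
        if i == 0:
            flags += ' -m'                  # restore cluster metadata once
        if i == len(node_ids) - 1:
            flags += ' -e'                  # write epoch to ndb_apply_status
        cmds.append(f"ndb_restore -c {mgm_host} -n {node}"
                    f" -b {backup_id}{flags} -r {backup_dir}")
    return cmds

for cmd in restore_commands('rep-slave:1186', [2, 3, 4, 5], 1,
                            '/var/BACKUPS/BACKUP-1'):
    print(cmd)
```

Generating the commands this way avoids the easy mistake of repeating -m, which would attempt to re-create the cluster metadata more than once.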
To perform a backup and restore on a second replication channel, it is necessary only to repeat these steps, substituting the host names and IDs of the secondary master and slave for those of the primary master and slave replication servers where appropriate, and running the preceding statements on them.

For additional information on performing Cluster backups and restoring Cluster from backups, see Section 21.5.3, “Online Backup of NDB Cluster”.

21.6.9.1 NDB Cluster Replication: Automating Synchronization of the Replication Slave to the Master Binary Log

It is possible to automate much of the process described in the previous section (see Section 21.6.9, “NDB Cluster Backups With NDB Cluster Replication”). The following Perl script reset-slave.pl serves as an example of how you can do this.

#!/usr/bin/perl -w

#  file: reset-slave.pl

#  Copyright ©2005 MySQL AB

#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.

#  You should have received a copy of the GNU General Public License
#  along with this program; if not, write to:
#  Free Software Foundation, Inc.
#  59 Temple Place, Suite 330
#  Boston, MA 02111-1307 USA
#
#  Version 1.1


######################## Includes ###############################

use DBI;

######################## Globals ################################

my  $m_host='';
my  $m_port='';
my  $m_user='';
my  $m_pass='';
my  $s_host='';
my  $s_port='';
my  $s_user='';
my  $s_pass='';
my  $dbhM='';
my  $dbhS='';

####################### Sub Prototypes ##########################

sub CollectCommandPromptInfo;
sub ConnectToDatabases;
sub DisconnectFromDatabases;
sub GetSlaveEpoch;
sub GetMasterInfo;
sub UpdateSlave;

######################## Program Main ###########################

CollectCommandPromptInfo;
ConnectToDatabases;
GetSlaveEpoch;
GetMasterInfo;
UpdateSlave;
DisconnectFromDatabases;

################## Collect Command Prompt Info ##################

sub CollectCommandPromptInfo
{
  ### Check that user has supplied correct number of command line args
  die "Usage:\n
       reset-slave >master MySQL host< >master MySQL port< \n
                   >master user< >master pass< >slave MySQL host< \n
                   >slave MySQL port< >slave user< >slave pass< \n
       All 8 arguments must be passed. Use BLANK for NULL passwords\n"
       unless @ARGV == 8;

  $m_host  =  $ARGV[0];
  $m_port  =  $ARGV[1];
  $m_user  =  $ARGV[2];
  $m_pass  =  $ARGV[3];
  $s_host  =  $ARGV[4];
  $s_port  =  $ARGV[5];
  $s_user  =  $ARGV[6];
  $s_pass  =  $ARGV[7];

  if ($m_pass eq "BLANK") { $m_pass = '';}
  if ($s_pass eq "BLANK") { $s_pass = '';}
}

###############  Make connections to both databases #############

sub ConnectToDatabases
{
  ### Connect to both master and slave cluster databases

  ### Connect to master
  $dbhM
    = DBI->connect(
    "dbi:mysql:database=mysql;host=$m_host;port=$m_port",
    "$m_user", "$m_pass")
      or die "Can't connect to Master Cluster MySQL process!
              Error: $DBI::errstr\n";

  ### Connect to slave
  $dbhS
    = DBI->connect(
          "dbi:mysql:database=mysql;host=$s_host;port=$s_port",
          "$s_user", "$s_pass")
    or die "Can't connect to Slave Cluster MySQL process!
            Error: $DBI::errstr\n";
}

################  Disconnect from both databases ################

sub DisconnectFromDatabases
{
  ### Disconnect from master

  $dbhM->disconnect
  or warn " Disconnection failed: $DBI::errstr\n";

  ### Disconnect from slave

  $dbhS->disconnect
  or warn " Disconnection failed: $DBI::errstr\n";
}

######################  Find the last good GCI ##################

sub GetSlaveEpoch
{
  $sth = $dbhS->prepare("SELECT MAX(epoch)
                         FROM mysql.ndb_apply_status;")
      or die "Error while preparing to select epoch from slave: ",
             $dbhS->errstr;

  $sth->execute
      or die "Selecting epoch from slave error: ", $sth->errstr;

  $sth->bind_col (1, \$epoch);
  $sth->fetch;
  print "\tSlave Epoch =  $epoch\n";
  $sth->finish;
}

#######  Find the position of the last GCI in the binary log ########

sub GetMasterInfo
{
  $sth = $dbhM->prepare("SELECT
                           SUBSTRING_INDEX(File, '/', -1), Position
                         FROM mysql.ndb_binlog_index
                         WHERE epoch > $epoch
                         ORDER BY epoch ASC LIMIT 1;")
      or die "Prepare to select from master error: ", $dbhM->errstr;

  $sth->execute
      or die "Selecting from master error: ", $sth->errstr;

  $sth->bind_col (1, \$binlog);
  $sth->bind_col (2, \$binpos);
  $sth->fetch;
  print "\tMaster binary log =  $binlog\n";
  print "\tMaster binary log position =  $binpos\n";
  $sth->finish;
}

##########  Set the slave to process from that location #########

sub UpdateSlave
{
  $sth = $dbhS->prepare("CHANGE MASTER TO
                         MASTER_LOG_FILE='$binlog',
                         MASTER_LOG_POS=$binpos;")
      or die "Prepare to CHANGE MASTER error: ", $dbhS->errstr;

  $sth->execute
       or die "CHANGE MASTER on slave error: ", $sth->errstr;
  $sth->finish;
  print "\tSlave has been updated. You may now start the slave.\n";
}

# end reset-slave.pl

21.6.9.2 Point-In-Time Recovery Using NDB Cluster Replication

Point-in-time recovery—that is, recovery of data changes made since a given point in time—is performed after restoring a full backup that returns the server to its state when the backup was made. Performing point-in-time recovery of NDB Cluster tables with NDB Cluster and NDB Cluster Replication can be accomplished using a native NDB data backup (taken by issuing START BACKUP in the ndb_mgm client) and restoring the ndb_binlog_index table (from a dump made using mysqldump).

To perform point-in-time recovery of NDB Cluster, it is necessary to follow the steps shown here:

  1. Back up all NDB databases in the cluster, using the START BACKUP command in the ndb_mgm client (see Section 21.5.3, “Online Backup of NDB Cluster”).

  2. At some later point, prior to restoring the cluster, make a backup of the mysql.ndb_binlog_index table. It is probably simplest to use mysqldump for this task. Also back up the binary log files at this time.

    This backup should be updated regularly—perhaps even hourly—depending on your needs.

  3. (Catastrophic failure or error occurs.)

  4. Locate the last known good backup.

  5. Clear the data node file systems (using ndbd --initial or ndbmtd --initial).

    Note

    NDB Cluster Disk Data tablespace and log files are not removed by --initial. You must delete these manually.

  6. Use DROP TABLE or TRUNCATE TABLE with the mysql.ndb_binlog_index table.

  7. Execute ndb_restore, restoring all data. You must include the --restore-epoch option when you run ndb_restore, so that the ndb_apply_status table is populated correctly. (See Section 21.4.24, “ndb_restore — Restore an NDB Cluster Backup”, for more information.)

  8. Restore the ndb_binlog_index table from the output of mysqldump and restore the binary log files from backup, if necessary.

  9. Find the epoch applied most recently—that is, the maximum epoch column value in the ndb_apply_status table—as the user variable @LATEST_EPOCH:

    SELECT @LATEST_EPOCH:=MAX(epoch)
        FROM mysql.ndb_apply_status;
    
  10. Find the latest binary log file (@FIRST_FILE) and position (Position column value) within this file that correspond to @LATEST_EPOCH in the ndb_binlog_index table:

    SELECT Position, @FIRST_FILE:=File
        FROM mysql.ndb_binlog_index
        WHERE epoch > @LATEST_EPOCH ORDER BY epoch ASC LIMIT 1;
    
  11. Using mysqlbinlog, replay the binary log events from the given file and position up to the point of the failure. (See Section 4.6.7, “mysqlbinlog — Utility for Processing Binary Log Files”.)

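Steps 9 through 11 can likewise be glued together in application code: find the first binary log file and position past the most recent applied epoch, then assemble the mysqlbinlog replay command. A hedged sketch (file names, positions, and epochs are invented; the function only builds the command string):

```python
# Build the mysqlbinlog replay command of Step 11 from the file/position
# found in Step 10. All values here are invented for illustration; this
# code only assembles strings and does not touch a server.

def next_binlog_start(binlog_index_rows, latest_epoch):
    """First (File, Position) with epoch > latest_epoch, ascending by epoch.
    Note the strict '>' here, matching the Step 10 query."""
    rows = sorted(r for r in binlog_index_rows if r[0] > latest_epoch)
    if not rows:
        return None
    _, file, position = rows[0]
    return file.rsplit('/', 1)[-1], position

def replay_command(binlog_dir, file, position):
    # Piping mysqlbinlog output into mysql replays the events.
    return f"mysqlbinlog --start-position={position} {binlog_dir}/{file} | mysql"

index = [(100, '/var/log/mysql/binlog.000007', 4),
         (104, '/var/log/mysql/binlog.000008', 520)]
start = next_binlog_start(index, 100)
print(start)                                  # ('binlog.000008', 520)
print(replay_command('/var/log/mysql', *start))
```

In a real recovery the replay would be stopped at the point of failure, for example with mysqlbinlog's --stop-datetime option.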
See also Section 7.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”, for more information about the binary log, replication, and incremental recovery.

21.6.10 NDB Cluster Replication: Multi-Master and Circular Replication

It is possible to use NDB Cluster in multi-master replication, including circular replication between a number of NDB Clusters.

Circular replication example.  In the next few paragraphs we consider the example of a replication setup involving three NDB Clusters numbered 1, 2, and 3, in which Cluster 1 acts as the replication master for Cluster 2, Cluster 2 acts as the master for Cluster 3, and Cluster 3 acts as the master for Cluster 1. Each cluster has two SQL nodes, with SQL nodes A and B belonging to Cluster 1, SQL nodes C and D belonging to Cluster 2, and SQL nodes E and F belonging to Cluster 3.

Circular replication using these clusters is supported as long as the following conditions are met:

  • The SQL nodes on all masters and slaves are the same.

  • All SQL nodes acting as replication masters and slaves are started with the log_slave_updates system variable enabled.

This type of circular replication setup is shown in the following diagram:

Figure 21.47 NDB Cluster Circular Replication with All Masters As Slaves

Content is described in the surrounding text.

In this scenario, SQL node A in Cluster 1 replicates to SQL node C in Cluster 2; SQL node C replicates to SQL node E in Cluster 3; SQL node E replicates to SQL node A. In other words, the replication line (indicated by the curved arrows in the diagram) directly connects all SQL nodes used as replication masters and slaves.

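In terms of server configuration, the second condition above means that each of the six SQL nodes would include something like the following (a hypothetical my.cnf fragment; the server-id value is illustrative and must be unique per server):

```ini
# Hypothetical my.cnf fragment for an SQL node acting as both a
# replication master and slave in the circular setup.
[mysqld]
ndbcluster
server-id=10
log-bin
log-slave-updates
```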
It is also possible to set up circular replication in such a way that not all master SQL nodes are also slaves, as shown here:

Figure 21.48 NDB Cluster Circular Replication Where Not All Masters Are Slaves

Logic is described in the surrounding text. Here SQL node A in Cluster 1 replicates to SQL node C in Cluster 2; SQL node D in Cluster 2 replicates to SQL node F in Cluster 3; SQL node E in Cluster 3 replicates to SQL node B in Cluster 1.

In this case, different SQL nodes in each cluster are used as replication masters and slaves. However, you must not start any of the SQL nodes with the log_slave_updates system variable enabled. This type of circular replication scheme for NDB Cluster, in which the line of replication (again indicated by the curved arrows in the diagram) is discontinuous, should be possible, but it should be noted that it has not yet been thoroughly tested and must therefore still be considered experimental.

Using NDB-native backup and restore to initialize a slave NDB Cluster.  When setting up circular replication, it is possible to initialize the slave cluster by using the management client START BACKUP command on one NDB Cluster to create a backup and then applying this backup on another NDB Cluster using ndb_restore. However, this does not automatically create binary logs on the second NDB Cluster's SQL node acting as the replication slave. In order to cause the binary logs to be created, you must issue a SHOW TABLES statement on that SQL node; this should be done prior to running START SLAVE.

This is a known issue which we intend to address in a future release.

Multi-master failover example.  In this section, we discuss failover in a multi-master NDB Cluster replication setup with three NDB Clusters having server IDs 1, 2, and 3. In this scenario, Cluster 1 replicates to Clusters 2 and 3; Cluster 2 also replicates to Cluster 3. This relationship is shown here:

Figure 21.49 NDB Cluster Multi-Master Replication With 3 Masters

Multi-master NDB Cluster replication setup with three NDB Clusters having server IDs 1, 2, and 3; Cluster 1 replicates to Clusters 2 and 3; Cluster 2 also replicates to Cluster 3.

In other words, data replicates from Cluster 1 to Cluster 3 through 2 different routes: directly, and by way of Cluster 2.

Not all MySQL servers taking part in multi-master replication must act as both master and slave, and a given NDB Cluster might use different SQL nodes for different replication channels. Such a case is shown here:

Figure 21.50 NDB Cluster Multi-Master Replication, With MySQL Servers

Concepts are described in the surrounding text. Shows three replication channels: SQL node A in Cluster 1 replicates to SQL node F in Cluster 3; SQL node B in Cluster 1 replicates to SQL node C in Cluster 2; SQL node E in Cluster 2 replicates to SQL node G in Cluster 3. SQL nodes A and B in Cluster 1 have --log-slave-updates=0; SQL node C in Cluster 2, and SQL nodes F and G in Cluster 3, have --log-slave-updates=1; and SQL nodes D and E in Cluster 2 have --log-slave-updates=0.

MySQL servers acting as replication slaves must be run with the log_slave_updates system variable enabled. Which mysqld processes require this option is also shown in the preceding diagram.

Note

Using the log_slave_updates system variable has no effect on servers not being run as replication slaves.

The need for failover arises when one of the replicating clusters goes down. In this example, we consider the case where Cluster 1 is lost to service, and so Cluster 3 loses 2 sources of updates from Cluster 1. Because replication between NDB Clusters is asynchronous, there is no guarantee that Cluster 3's updates originating directly from Cluster 1 are more recent than those received through Cluster 2. You can handle this by ensuring that Cluster 3 catches up to Cluster 2 with regard to updates from Cluster 1. In terms of MySQL servers, this means that you need to replicate any outstanding updates from MySQL server C to server F.

On server C, perform the following queries:

mysqlC> SELECT @latest:=MAX(epoch)
     ->     FROM mysql.ndb_apply_status
     ->     WHERE server_id=1;

mysqlC> SELECT
     ->     @file:=SUBSTRING_INDEX(File, '/', -1),
     ->     @pos:=Position
     ->     FROM mysql.ndb_binlog_index
     ->     WHERE orig_epoch >= @latest
     ->     AND orig_server_id = 1
     ->     ORDER BY epoch ASC LIMIT 1;
Note

You can improve the performance of this query, and thus likely speed up failover times significantly, by adding the appropriate index to the ndb_binlog_index table. See Section 21.6.4, “NDB Cluster Replication Schema and Tables”, for more information.

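Because the failover query shown previously filters on orig_server_id and orig_epoch, an index covering those columns can avoid a full scan of ndb_binlog_index; the exact index shown here is a suggestion only:

mysqlC> ALTER TABLE mysql.ndb_binlog_index
     ->     ADD INDEX (orig_server_id, orig_epoch);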

Copy over the values for @file and @pos manually from server C to server F (or have your application perform the equivalent). Then, on server F, execute the following CHANGE MASTER TO statement:

mysqlF> CHANGE MASTER TO
     ->     MASTER_HOST = 'serverC',
     ->     MASTER_LOG_FILE='@file',
     ->     MASTER_LOG_POS=@pos;

Once this has been done, you can issue a START SLAVE statement on MySQL server F, and any missing updates originating from server B will be replicated to server F.

The CHANGE MASTER TO statement also supports an IGNORE_SERVER_IDS option which takes a comma-separated list of server IDs and causes events originating from the corresponding servers to be ignored. For more information, see Section 13.4.2.1, “CHANGE MASTER TO Syntax”, and Section 13.7.5.34, “SHOW SLAVE STATUS Syntax”. For information about how this option interacts with the ndb_log_apply_status variable, see Section 21.6.8, “Implementing Failover with NDB Cluster Replication”.

21.6.11 NDB Cluster Replication Conflict Resolution

When using a replication setup involving multiple masters (including circular replication), it is possible that different masters may try to update the same row on the slave with different data. Conflict resolution in NDB Cluster Replication provides a means of resolving such conflicts by permitting a user-defined resolution column to be used to determine whether or not an update on a given master should be applied on the slave.

Some types of conflict resolution supported by NDB Cluster (NDB$OLD(), NDB$MAX(), NDB$MAX_DELETE_WIN()) implement this user-defined column as a timestamp column (although its type cannot be TIMESTAMP, as explained later in this section). These types of conflict resolution are always applied on a row-by-row basis rather than a transactional one. The epoch-based conflict resolution functions NDB$EPOCH() and NDB$EPOCH_TRANS() compare the order in which epochs are replicated (and thus these functions are transactional). Different methods can be used to compare resolution column values on the slave when conflicts occur, as explained later in this section; the method used can be set on a per-table basis.

You should also keep in mind that it is the application's responsibility to ensure that the resolution column is correctly populated with relevant values, so that the resolution function can make the appropriate choice when determining whether to apply an update.

Requirements.  Preparations for conflict resolution must be made on both the master and the slave. These tasks are described in the following list:

  • On the master writing the binary logs, you must determine which columns are sent (all columns or only those that have been updated). This is done for the MySQL Server as a whole by applying the mysqld startup option --ndb-log-updated-only (described later in this section) or on a per-table basis by entries in the mysql.ndb_replication table (see The ndb_replication system table).

    Note

    If you are replicating tables with very large columns (such as TEXT or BLOB columns), --ndb-log-updated-only can also be useful for reducing the size of the master and slave binary logs and avoiding possible replication failures due to exceeding max_allowed_packet.

    See Section 16.4.1.19, “Replication and max_allowed_packet”, for more information about this issue.

  • On the slave, you must determine which type of conflict resolution to apply (latest timestamp wins, same timestamp wins, primary wins, primary wins complete transaction, or none). This is done using the mysql.ndb_replication system table, on a per-table basis (see The ndb_replication system table).

  • NDB Cluster also supports read conflict detection, that is, detecting conflicts between reads of a given row in one cluster and updates or deletes of the same row in another cluster. This requires exclusive read locks obtained by setting ndb_log_exclusive_reads equal to 1 on the slave. All rows read by a conflicting read are logged in the exceptions table. For more information, see Read conflict detection and resolution.

When using the functions NDB$OLD(), NDB$MAX(), and NDB$MAX_DELETE_WIN() for timestamp-based conflict resolution, we often refer to the column used for determining updates as a timestamp column. However, the data type of this column is never TIMESTAMP; instead, its data type should be INT (INTEGER) or BIGINT. The timestamp column should also be UNSIGNED and NOT NULL.

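A table intended for use with these functions might be defined as shown here, with mts acting as the timestamp column; the table and column names used are examples only:

CREATE TABLE test.t1 (
    id INT UNSIGNED NOT NULL PRIMARY KEY,
    data VARCHAR(30),
    mts BIGINT UNSIGNED NOT NULL
)   ENGINE=NDB;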

The NDB$EPOCH() and NDB$EPOCH_TRANS() functions discussed later in this section work by comparing the relative order of replication epochs applied on a primary and secondary NDB Cluster, and do not make use of timestamps.

Master column control.  We can see update operations in terms of before and after images—that is, the states of the table before and after the update is applied. Normally, when updating a table with a primary key, the before image is not of great interest; however, when we need to determine on a per-update basis whether or not to use the updated values on a replication slave, we need to make sure that both images are written to the master's binary log. This is done with the --ndb-log-update-as-write option for mysqld, as described later in this section.

Important

Whether logging of complete rows or of updated columns only is done is decided when the MySQL server is started, and cannot be changed online; you must either restart mysqld, or start a new mysqld instance with different logging options.

Logging Full or Partial Rows (--ndb-log-updated-only Option)

Property Value
Command-Line Format --ndb-log-updated-only[={OFF|ON}]
System Variable ndb_log_updated_only
Scope Global
Dynamic Yes
Type Boolean
Default Value ON

For purposes of conflict resolution, there are two basic methods of logging rows, as determined by the setting of the --ndb-log-updated-only option for mysqld:

  • Log complete rows

  • Log only column data that has been updated—that is, column data whose value has been set, regardless of whether or not this value was actually changed. This is the default behavior.

It is usually sufficient—and more efficient—to log updated columns only; however, if you need to log full rows, you can do so by setting --ndb-log-updated-only to 0 or OFF.

--ndb-log-update-as-write Option: Logging Changed Data as Updates

Property Value
Command-Line Format --ndb-log-update-as-write[={OFF|ON}]
System Variable ndb_log_update_as_write
Scope Global
Dynamic Yes
Type Boolean
Default Value ON

The setting of the MySQL Server's --ndb-log-update-as-write option determines whether logging is performed with or without the before image. Because conflict resolution is done in the MySQL Server's update handler, it is necessary to control logging on the master such that updates are updates and not writes; that is, such that updates are treated as changes in existing rows rather than the writing of new rows (even though these replace existing rows). This option is turned on by default; in other words, updates are treated as writes. (That is, updates are by default written as write_row events in the binary log, rather than as update_row events.)

To turn off the option, start the master mysqld with --ndb-log-update-as-write=0 or --ndb-log-update-as-write=OFF. You must do this when replicating from NDB tables to tables using a different storage engine; see Replication from NDB to other storage engines, and Replication from NDB to a nontransactional storage engine, for more information.

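Both logging options can also be set in the option file read by the master mysqld; a minimal sketch follows:

[mysqld]
ndb-log-update-as-write=0
ndb-log-updated-only=0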

Conflict resolution control.  Conflict resolution is usually enabled on the server where conflicts can occur. Like logging method selection, it is enabled by entries in the mysql.ndb_replication table.

The ndb_replication system table.  To enable conflict resolution, it is necessary to create an ndb_replication table in the mysql system database on the master, the slave, or both, depending on the conflict resolution type and method to be employed. This table is used to control logging and conflict resolution functions on a per-table basis, and has one row per table involved in replication. ndb_replication is created and filled with control information on the server where the conflict is to be resolved. In a simple master-slave setup where data can also be changed locally on the slave this will typically be the slave. In a more complex master-master (2-way) replication schema this will usually be all of the masters involved. Each row in mysql.ndb_replication corresponds to a table being replicated, and specifies how to log and resolve conflicts (that is, which conflict resolution function, if any, to use) for that table. The definition of the mysql.ndb_replication table is shown here:

CREATE TABLE mysql.ndb_replication  (
    db VARBINARY(63),
    table_name VARBINARY(63),
    server_id INT UNSIGNED,
    binlog_type INT UNSIGNED,
    conflict_fn VARBINARY(128),
    PRIMARY KEY USING HASH (db, table_name, server_id)
)   ENGINE=NDB
PARTITION BY KEY(db,table_name);

The columns in this table are described in the next few paragraphs.

db.  The name of the database containing the table to be replicated. You may employ either or both of the wildcards _ and % as part of the database name. Matching is similar to what is implemented for the LIKE operator.

table_name.  The name of the table to be replicated. The table name may include either or both of the wildcards _ and %. Matching is similar to what is implemented for the LIKE operator.

server_id.  The unique server ID of the MySQL instance (SQL node) where the table resides.

binlog_type.  The type of binary logging to be employed. This is determined as shown in the following table:

Table 21.408 binlog_type values, with internal values and descriptions

Value Internal Value Description
0 NBT_DEFAULT Use server default
1 NBT_NO_LOGGING Do not log this table in the binary log
2 NBT_UPDATED_ONLY Only updated attributes are logged
3 NBT_FULL Log full row, even if not updated (MySQL server default behavior)
4 NBT_USE_UPDATE (For generating NBT_UPDATED_ONLY_USE_UPDATE and NBT_FULL_USE_UPDATE values only—not intended for separate use)
5 [Not used] ---
6 NBT_UPDATED_ONLY_USE_UPDATE (equal to NBT_UPDATED_ONLY | NBT_USE_UPDATE) Use updated attributes, even if values are unchanged
7 NBT_FULL_USE_UPDATE (equal to NBT_FULL | NBT_USE_UPDATE) Use full row, even if values are unchanged

conflict_fn.  The conflict resolution function to be applied. This function must be specified as one of those shown in the following list:

  • NDB$OLD(column_name)

  • NDB$MAX(column_name)

  • NDB$MAX_DELETE_WIN()

  • NDB$EPOCH() and NDB$EPOCH_TRANS()

  • NDB$EPOCH2() and NDB$EPOCH2_TRANS()

  • NULL: Indicates that conflict resolution is not to be used for the corresponding table

These functions are described in the next few paragraphs.
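For example, the following row (which assumes a hypothetical table test.t1 having a timestamp column named mts) causes conflicts on that table to be resolved using NDB$MAX() on the SQL node whose server ID is 1, with full rows logged as updates (binlog_type 7, NBT_FULL_USE_UPDATE):

INSERT INTO mysql.ndb_replication
    VALUES ('test', 't1', 1, 7, 'NDB$MAX(mts)');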

NDB$OLD(column_name).  If the value of column_name is the same on both the master and the slave, then the update is applied; otherwise, the update is not applied on the slave and an exception is written to the log. This is illustrated by the following pseudocode:

if (master_old_column_value == slave_current_column_value)
  apply_update();
else
  log_exception();

This function can be used for same value wins conflict resolution. This type of conflict resolution ensures that updates are not applied on the slave from the wrong master.

Important

The column value from the master's before image is used by this function.

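When a conflict is detected, an entry is written to the table's exceptions table. The sketch shown here assumes a hypothetical table test.t1 whose primary key is a single INT column id; the first four columns are required, and are followed by copies of the original table's primary key columns:

CREATE TABLE test.t1$EX  (
    server_id INT UNSIGNED,
    master_server_id INT UNSIGNED,
    master_epoch BIGINT UNSIGNED,
    count INT UNSIGNED,
    id INT NOT NULL,
    PRIMARY KEY (server_id, master_server_id, master_epoch, count)
)   ENGINE=NDB;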

NDB$MAX(column_name).  If the timestamp column value for a given row coming from the master is higher than that on the slave, it is applied; otherwise it is not applied on the slave. This is illustrated by the following pseudocode:

if (master_new_column_value > slave_current_column_value)
  apply_update();

This function can be used for greatest timestamp wins conflict resolution. This type of conflict resolution ensures that, in the event of a conflict, the version of the row that was most recently updated is the version that persists.

Important

The column value from the master's after image is used by this function.

NDB$MAX_DELETE_WIN().  This is a variation on NDB$MAX(). Because no timestamp is available for a delete operation, a delete using NDB$MAX() is in fact processed as NDB$OLD(); however, for some use cases, this is not optimal. With NDB$MAX_DELETE_WIN(), if the timestamp column value for a given row coming from the master (whether it adds a new row or updates an existing one) is higher than that on the slave, the change is applied; delete operations, however, are treated as always having the higher value. This is illustrated in the following pseudocode:

if ( (master_new_column_value > slave_current_column_value)
        ||
      operation.type == "delete")
  apply_update();

This function can be used for greatest timestamp, delete wins conflict resolution. This type of conflict resolution ensures that, in the event of a conflict, the version of the row that was deleted or (otherwise) most recently updated is the version that persists.

Note

As with NDB$MAX(), the column value from the master's after image is the value used by this function.

NDB$EPOCH() and NDB$EPOCH_TRANS().  The NDB$EPOCH() function tracks the order in which replicated epochs are applied on a slave NDB Cluster relative to changes originating on the slave. This relative ordering is used to determine whether changes originating on the slave are concurrent with any changes that originate locally, and are therefore potentially in conflict.

Most of what follows in the description of NDB$EPOCH() also applies to NDB$EPOCH_TRANS(). Any exceptions are noted in the text.

NDB$EPOCH() is asymmetric, operating on one NDB Cluster in a two-cluster circular replication configuration (sometimes referred to as active-active replication). We refer here to the cluster on which it operates as the primary, and to the other as the secondary. The slave on the primary is responsible for detecting and handling conflicts, while the slave on the secondary is not involved in any conflict detection or handling.

When the slave on the primary detects conflicts, it injects events into its own binary log to compensate for these; this ensures that the secondary NDB Cluster eventually realigns itself with the primary and so keeps the primary and secondary from diverging. This compensation and realignment mechanism requires that the primary NDB Cluster always wins any conflicts with the secondary—that is, that the primary's changes are always used rather than those from the secondary in event of a conflict. This primary always wins rule has the following implications:

  • Operations that change data, once committed on the primary, are fully persistent and will not be undone or rolled back by conflict detection and resolution.

  • Data read from the primary is fully consistent. Any changes committed on the Primary (locally or from the slave) will not be reverted later.

  • Operations that change data on the secondary may later be reverted if the primary determines that they are in conflict.

  • Individual rows read on the secondary are self-consistent at all times, each row always reflecting either a state committed by the secondary, or one committed by the primary.

  • Sets of rows read on the secondary may not necessarily be consistent at a given single point in time. For NDB$EPOCH_TRANS(), this is a transient state; for NDB$EPOCH(), it can be a persistent state.

  • Assuming a period of sufficient length without any conflicts, all data on the secondary NDB Cluster (eventually) becomes consistent with the primary's data.

NDB$EPOCH() and NDB$EPOCH_TRANS() do not require any user schema modifications, or application changes to provide conflict detection. However, careful thought must be given to the schema used, and the access patterns used, to verify that the complete system behaves within specified limits.

Each of the NDB$EPOCH() and NDB$EPOCH_TRANS() functions can take an optional parameter; this is the number of bits to use to represent the lower 32 bits of the epoch, and should be set to no less than

CEIL( LOG2( TimeBetweenGlobalCheckpoints / TimeBetweenEpochs ), 1)

For the default values of these configuration parameters (2000 and 100 milliseconds, respectively), this gives a value of 5 bits, so the default value (6) should be sufficient, unless other values are used for TimeBetweenGlobalCheckpoints, TimeBetweenEpochs, or both. A value that is too small can result in false positives, while one that is too large could lead to excessive wasted space in the database.

Both NDB$EPOCH() and NDB$EPOCH_TRANS() insert entries for conflicting rows into the relevant exceptions tables, provided that these tables have been defined according to the same exceptions table schema rules as described elsewhere in this section (see NDB$OLD(column_name)). You need to create any exceptions table before creating the table with which it is to be used.

As with the other conflict detection functions discussed in this section, NDB$EPOCH() and NDB$EPOCH_TRANS() are activated by including relevant entries in the mysql.ndb_replication table (see The ndb_replication system table). The roles of the primary and secondary NDB Clusters in this scenario are fully determined by mysql.ndb_replication table entries.

Because the conflict detection algorithms employed by NDB$EPOCH() and NDB$EPOCH_TRANS() are asymmetric, you must use different values for the primary slave's and secondary slave's server_id entries.

A conflict between DELETE operations alone is not sufficient to trigger a conflict using NDB$EPOCH() or NDB$EPOCH_TRANS(), and the relative placement within epochs does not matter. (Bug #18459944)

Conflict detection status variables.  Several status variables can be used to monitor conflict detection. You can see how many rows have been found in conflict by NDB$EPOCH() since this slave was last restarted from the current value of the Ndb_conflict_fn_epoch system status variable.

Ndb_conflict_fn_epoch_trans provides the number of rows that have been found directly in conflict by NDB$EPOCH_TRANS(). Ndb_conflict_fn_epoch2 and Ndb_conflict_fn_epoch2_trans show the number of rows found in conflict by NDB$EPOCH2() and NDB$EPOCH2_TRANS(), respectively. The number of rows actually realigned, including those affected due to their membership in or dependency on the same transactions as other conflicting rows, is given by Ndb_conflict_trans_row_reject_count.

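These counters can be inspected on the SQL node of interest using SHOW GLOBAL STATUS, as shown here:

mysql> SHOW GLOBAL STATUS LIKE 'Ndb_conflict_fn_epoch%';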

For more information, see Section 21.3.3.9.3, “NDB Cluster Status Variables”.

Limitations on NDB$EPOCH().  The following limitations currently apply when using NDB$EPOCH() to perform conflict detection:

  • Conflicts are detected using NDB Cluster epoch boundaries, with granularity proportional to TimeBetweenEpochs (default: 100 milliseconds). The minimum conflict window is the minimum time during which concurrent updates to the same data on both clusters always report a conflict. This is always a nonzero length of time, and is roughly proportional to 2 * (latency + queueing + TimeBetweenEpochs). This implies that—assuming the default for TimeBetweenEpochs and ignoring any latency between clusters (as well as any queuing delays)—the minimum conflict window size is approximately 200 milliseconds. This minimum window should be considered when looking at expected application race patterns.

  • Additional storage is required for tables using the NDB$EPOCH() and NDB$EPOCH_TRANS() functions; from 1 to 32 bits extra space per row is required, depending on the value passed to the function.

  • Conflicts between delete operations may result in divergence between the primary and secondary. When a row is deleted on both clusters concurrently, the conflict can be detected, but is not recorded, since the row is deleted. This means that further conflicts during the propagation of any subsequent realignment operations will not be detected, which can lead to divergence.

    Deletes should be externally serialized, or routed to one cluster only. Alternatively, a separate row should be updated transactionally with such deletes and any inserts that follow them, so that conflicts can be tracked across row deletes. This may require changes in applications.

  • Only two NDB Clusters in a circular active-active configuration are currently supported when using NDB$EPOCH() or NDB$EPOCH_TRANS() for conflict detection.

  • Tables having BLOB or TEXT columns are not currently supported with NDB$EPOCH() or NDB$EPOCH_TRANS().

NDB$EPOCH_TRANS().  NDB$EPOCH_TRANS() extends the NDB$EPOCH() function. Conflicts are detected and handled in the same way using the primary wins all rule (see NDB$EPOCH() and NDB$EPOCH_TRANS()) but with the extra condition that any other rows updated in the same transaction in which the conflict occurred are also regarded as being in conflict. In other words, where NDB$EPOCH() realigns individual conflicting rows on the secondary, NDB$EPOCH_TRANS() realigns conflicting transactions.

In addition, any transactions which are detectably dependent on a conflicting transaction are also regarded as being in conflict, these dependencies being determined by the contents of the secondary cluster's binary log. Since the binary log contains only data modification operations (inserts, updates, and deletes), only overlapping data modifications are used to determine dependencies between transactions.

NDB$EPOCH_TRANS() is subject to the same conditions and limitations as NDB$EPOCH(), and in addition requires that Version 2 binary log row events are used (log_bin_use_v1_row_events equal to 0), which adds a storage overhead of 2 bytes per event in the binary log. In addition, all transaction IDs must be recorded in the secondary's binary log (--ndb-log-transaction-id option), which adds a further variable overhead (up to 13 bytes per row).

See NDB$EPOCH() and NDB$EPOCH_TRANS().

Status information.  A server status variable Ndb_conflict_fn_max provides a count of the number of times that a row was not applied on the current SQL node due to greatest timestamp wins conflict resolution since the last time that mysqld was started.

The number of times that a row was not applied as the result of same timestamp wins conflict resolution on a given mysqld since the last time it was restarted is given by the global status variable Ndb_conflict_fn_old. In addition to incrementing Ndb_conflict_fn_old, the primary key of the row that was not used is inserted into an exceptions table, as explained later in this section.
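
Both counters can be read from any SQL node with a SHOW GLOBAL STATUS statement; for example (illustrative):

```sql
-- Cumulative counts since this mysqld was last started
SHOW GLOBAL STATUS LIKE 'Ndb_conflict_fn_max';  -- greatest timestamp wins rejections
SHOW GLOBAL STATUS LIKE 'Ndb_conflict_fn_old';  -- same timestamp wins rejections
```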

NDB$EPOCH2().  The NDB$EPOCH2() function is similar to NDB$EPOCH(), except that NDB$EPOCH2() provides for delete-delete handling with a circular replication (master-master) topology. In this scenario, primary and secondary roles are assigned to the two masters by setting the ndb_slave_conflict_role system variable to the appropriate value on each master (usually one each of PRIMARY and SECONDARY). When this is done, modifications made by the secondary are reflected by the primary back to the secondary, which then conditionally applies them.

NDB$EPOCH2_TRANS().  NDB$EPOCH2_TRANS() extends the NDB$EPOCH2() function. Conflicts are detected and handled in the same way, with primary and secondary roles assigned to the replicating clusters, but with the extra condition that any other rows updated in the same transaction in which the conflict occurred are also regarded as being in conflict. That is, NDB$EPOCH2() realigns individual conflicting rows on the secondary, while NDB$EPOCH2_TRANS() realigns conflicting transactions.

NDB$EPOCH() and NDB$EPOCH_TRANS() use metadata that is specified per row, per last modified epoch, to determine on the primary whether an incoming replicated row change from the secondary is concurrent with a locally committed change; concurrent changes are regarded as conflicting, with subsequent exceptions table updates and realignment of the secondary. A problem arises when a row is deleted on the primary: there is then no longer any last-modified epoch available to determine whether any replicated operations conflict, which means that conflicting delete operations are not detected. This can result in divergence; an example is a delete on one cluster which is concurrent with a delete and insert on the other. This is why delete operations can be routed to only one cluster when using NDB$EPOCH() and NDB$EPOCH_TRANS().

NDB$EPOCH2() bypasses the issue just described (which would otherwise require storing information about deleted rows on the primary) by ignoring any delete-delete conflict, and by avoiding any potential resultant divergence as well. This is accomplished by reflecting any operation successfully applied on and replicated from the secondary back to the secondary. On its return to the secondary, it can be used to reapply an operation on the secondary which was deleted by an operation originating from the primary.

When using NDB$EPOCH2(), you should keep in mind that the secondary applies the delete from the primary, removing the new row until it is restored by a reflected operation. In theory, the subsequent insert or update on the secondary conflicts with the delete from the primary, but in this case, we choose to ignore this and allow the secondary to win, in the interest of preventing divergence between the clusters. In other words, after a delete, the primary does not detect conflicts, and instead adopts the secondary's subsequent changes immediately. Because of this, the secondary's state can revisit multiple previous committed states as it progresses to a final (stable) state, and some of these may be visible.

You should also be aware that reflecting all operations from the secondary back to the primary increases the size of the primary's binary log, as well as demands on bandwidth, CPU usage, and disk I/O.

Application of reflected operations on the secondary depends on the state of the target row on the secondary. Whether or not reflected changes are applied on the secondary can be tracked by checking the Ndb_conflict_reflected_op_prepare_count and Ndb_conflict_reflected_op_discard_count status variables. The number of changes applied is simply the difference between these two values (note that Ndb_conflict_reflected_op_prepare_count is always greater than or equal to Ndb_conflict_reflected_op_discard_count).
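
The difference can be computed directly in SQL; this sketch assumes that MySQL 5.7's performance_schema.global_status table is available on the secondary:

```sql
-- Reflected operations actually applied on the secondary =
-- prepared count minus discarded count
SELECT
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Ndb_conflict_reflected_op_prepare_count')
  - (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Ndb_conflict_reflected_op_discard_count')
    AS reflected_ops_applied;
```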

Events are applied if and only if both of the following conditions are true:

  • The existence of the row—that is, whether or not it exists—is in accordance with the type of event. For delete and update operations, the row must already exist. For insert operations, the row must not exist.

  • The row was last modified by the primary. It is possible that the modification was accomplished through the execution of a reflected operation.

If either of these conditions is not met, the reflected operation is discarded by the secondary.

Conflict resolution exceptions table.  To use the NDB$OLD() conflict resolution function, it is also necessary to create an exceptions table corresponding to each NDB table for which this type of conflict resolution is to be employed. This is also true when using NDB$EPOCH() or NDB$EPOCH_TRANS(). The name of this table is that of the table for which conflict resolution is to be applied, with the string $EX appended. (For example, if the name of the original table is mytable, the name of the corresponding exceptions table should be mytable$EX.) The syntax for creating the exceptions table is as shown here:

CREATE TABLE original_table$EX  (
    [NDB$]server_id INT UNSIGNED,
    [NDB$]master_server_id INT UNSIGNED,
    [NDB$]master_epoch BIGINT UNSIGNED,
    [NDB$]count INT UNSIGNED,

    [NDB$OP_TYPE ENUM('WRITE_ROW','UPDATE_ROW', 'DELETE_ROW',
      'REFRESH_ROW', 'READ_ROW') NOT NULL,]
    [NDB$CFT_CAUSE ENUM('ROW_DOES_NOT_EXIST', 'ROW_ALREADY_EXISTS',
      'DATA_IN_CONFLICT', 'TRANS_IN_CONFLICT') NOT NULL,]
    [NDB$ORIG_TRANSID BIGINT UNSIGNED NOT NULL,]

    original_table_pk_columns,

    [orig_table_column|orig_table_column$OLD|orig_table_column$NEW,]

    [additional_columns,]

    PRIMARY KEY([NDB$]server_id, [NDB$]master_server_id, [NDB$]master_epoch, [NDB$]count)
) ENGINE=NDB;

The first four columns are required. The names of the first four columns and the columns matching the original table's primary key columns are not critical; however, we suggest for reasons of clarity and consistency, that you use the names shown here for the server_id, master_server_id, master_epoch, and count columns, and that you use the same names as in the original table for the columns matching those in the original table's primary key.

If the exceptions table uses one or more of the optional columns NDB$OP_TYPE, NDB$CFT_CAUSE, or NDB$ORIG_TRANSID discussed later in this section, then each of the required columns must also be named using the prefix NDB$. If desired, you can use the NDB$ prefix to name the required columns even if you do not define any optional columns, but in this case, all four of the required columns must be named using the prefix.

Following these columns, the columns making up the original table's primary key should be copied in the order in which they are used to define the primary key of the original table. The data types for the columns duplicating the primary key columns of the original table should be the same as (or larger than) those of the original columns. A subset of the primary key columns may be used.

Regardless of the NDB Cluster version employed, the exceptions table must use the NDB storage engine. (An example that uses NDB$OLD() with an exceptions table is shown later in this section.)

Additional columns may optionally be defined following the copied primary key columns, but not before any of them; any such extra columns cannot be NOT NULL. NDB Cluster supports three additional, predefined optional columns NDB$OP_TYPE, NDB$CFT_CAUSE, and NDB$ORIG_TRANSID, which are described in the next few paragraphs.

NDB$OP_TYPE: This column can be used to obtain the type of operation causing the conflict. If you use this column, define it as shown here:

NDB$OP_TYPE ENUM('WRITE_ROW', 'UPDATE_ROW', 'DELETE_ROW',
    'REFRESH_ROW', 'READ_ROW') NOT NULL

The WRITE_ROW, UPDATE_ROW, and DELETE_ROW operation types represent user-initiated operations. REFRESH_ROW operations are operations generated by conflict resolution in compensating transactions sent back to the originating cluster from the cluster that detected the conflict. READ_ROW operations are user-initiated read tracking operations defined with exclusive row locks.

NDB$CFT_CAUSE: You can define an optional column NDB$CFT_CAUSE which provides the cause of the registered conflict. This column, if used, is defined as shown here:

NDB$CFT_CAUSE ENUM('ROW_DOES_NOT_EXIST', 'ROW_ALREADY_EXISTS',
    'DATA_IN_CONFLICT', 'TRANS_IN_CONFLICT') NOT NULL

ROW_DOES_NOT_EXIST can be reported as the cause for UPDATE_ROW and WRITE_ROW operations; ROW_ALREADY_EXISTS can be reported for WRITE_ROW events. DATA_IN_CONFLICT is reported when a row-based conflict function detects a conflict; TRANS_IN_CONFLICT is reported when a transactional conflict function rejects all of the operations belonging to a complete transaction.
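
When both optional columns are defined, conflicts recorded in the exceptions table can be summarized by type and cause; this query is illustrative only and assumes an exceptions table named mytable$EX:

```sql
-- Group logged conflicts by operation type and cause
SELECT NDB$OP_TYPE, NDB$CFT_CAUSE, COUNT(*) AS conflicts
FROM mytable$EX
GROUP BY NDB$OP_TYPE, NDB$CFT_CAUSE;
```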

NDB$ORIG_TRANSID: The NDB$ORIG_TRANSID column, if used, contains the ID of the originating transaction. This column should be defined as follows:

NDB$ORIG_TRANSID BIGINT UNSIGNED NOT NULL

NDB$ORIG_TRANSID is a 64-bit value generated by NDB. This value can be used to correlate multiple exceptions table entries belonging to the same conflicting transaction from the same or different exceptions tables.
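
For example, entries from two exceptions tables can be correlated on this ID; the table names t1$EX and t2$EX used here are hypothetical:

```sql
-- Collect all exception rows written for each conflicting transaction
SELECT 't1' AS source_table, NDB$ORIG_TRANSID, NDB$count FROM t1$EX
UNION ALL
SELECT 't2', NDB$ORIG_TRANSID, NDB$count FROM t2$EX
ORDER BY NDB$ORIG_TRANSID;
```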

Additional reference columns which are not part of the original table's primary key can be named colname$OLD or colname$NEW. colname$OLD references old values in update and delete operations—that is, operations containing DELETE_ROW events. colname$NEW can be used to reference new values in insert and update operations—in other words, operations using WRITE_ROW events, UPDATE_ROW events, or both types of events. Where a conflicting operation does not supply a value for a given non-primary-key reference column, the exceptions table row contains either NULL, or a defined default value for that column.

Important

The mysql.ndb_replication table is read when a data table is set up for replication, so the row corresponding to a table to be replicated must be inserted into mysql.ndb_replication before the table to be replicated is created.

Examples

The following examples assume that you already have a working NDB Cluster replication setup, as described in Section 21.6.5, “Preparing the NDB Cluster for Replication”, and Section 21.6.6, “Starting NDB Cluster Replication (Single Replication Channel)”.

NDB$MAX() example.  Suppose you wish to enable greatest timestamp wins conflict resolution on table test.t1, using column mycol as the timestamp. This can be done using the following steps:

  1. Make sure that you have started the master mysqld with --ndb-log-update-as-write=OFF.

  2. On the master, perform this INSERT statement:

    INSERT INTO mysql.ndb_replication
        VALUES ('test', 't1', 0, NULL, 'NDB$MAX(mycol)');
    

Inserting 0 into the server_id column indicates that all SQL nodes accessing this table should use conflict resolution. If you want to use conflict resolution on a specific mysqld only, use the actual server ID.

    Inserting NULL into the binlog_type column has the same effect as inserting 0 (NBT_DEFAULT); the server default is used.

  3. Create the test.t1 table:

    CREATE TABLE test.t1 (
        columns
        mycol INT UNSIGNED,
        columns
    ) ENGINE=NDB;
    

    Now, when updates are done on this table, conflict resolution is applied, and the version of the row having the greatest value for mycol is written to the slave.

Note

Other binlog_type options, such as NBT_UPDATED_ONLY_USE_UPDATE, should be used to control logging on the master by means of the ndb_replication table rather than by using command-line options.

NDB$OLD() example.  Suppose an NDB table such as the one defined here is being replicated, and you wish to enable same timestamp wins conflict resolution for updates to this table:

CREATE TABLE test.t2  (
    a INT UNSIGNED NOT NULL,
    b CHAR(25) NOT NULL,
    columns,
    mycol INT UNSIGNED NOT NULL,
    columns,
    PRIMARY KEY pk (a, b)
)   ENGINE=NDB;

The following steps are required, in the order shown:

  1. First—and prior to creating test.t2—you must insert a row into the mysql.ndb_replication table, as shown here:

    INSERT INTO mysql.ndb_replication
        VALUES ('test', 't2', 0, NULL, 'NDB$OLD(mycol)');
    

    Possible values for the binlog_type column are shown earlier in this section. The value 'NDB$OLD(mycol)' should be inserted into the conflict_fn column.

  2. Create an appropriate exceptions table for test.t2. The table creation statement shown here includes all required columns; any additional columns must be declared following these columns, and before the definition of the table's primary key.

    CREATE TABLE test.t2$EX  (
        server_id INT UNSIGNED,
        master_server_id INT UNSIGNED,
        master_epoch BIGINT UNSIGNED,
        count INT UNSIGNED,
        a INT UNSIGNED NOT NULL,
        b CHAR(25) NOT NULL,
    
        [additional_columns,]
    
        PRIMARY KEY(server_id, master_server_id, master_epoch, count)
    )   ENGINE=NDB;
    

    We can include additional columns for information about the type, cause, and originating transaction ID for a given conflict. We are also not required to supply matching columns for all primary key columns in the original table. This means you can create the exceptions table like this:

    CREATE TABLE test.t2$EX  (
        NDB$server_id INT UNSIGNED,
        NDB$master_server_id INT UNSIGNED,
        NDB$master_epoch BIGINT UNSIGNED,
        NDB$count INT UNSIGNED,
        a INT UNSIGNED NOT NULL,
    
        NDB$OP_TYPE ENUM('WRITE_ROW','UPDATE_ROW', 'DELETE_ROW',
          'REFRESH_ROW', 'READ_ROW') NOT NULL,
        NDB$CFT_CAUSE ENUM('ROW_DOES_NOT_EXIST', 'ROW_ALREADY_EXISTS',
          'DATA_IN_CONFLICT', 'TRANS_IN_CONFLICT') NOT NULL,
        NDB$ORIG_TRANSID BIGINT UNSIGNED NOT NULL,
    
        [additional_columns,]
    
        PRIMARY KEY(NDB$server_id, NDB$master_server_id, NDB$master_epoch, NDB$count)
    )   ENGINE=NDB;
    
    Note

    The NDB$ prefix is required for the four required columns since we included at least one of the columns NDB$OP_TYPE, NDB$CFT_CAUSE, or NDB$ORIG_TRANSID in the table definition.

  3. Create the table test.t2 as shown previously.

These steps must be followed for every table for which you wish to perform conflict resolution using NDB$OLD(). For each such table, there must be a corresponding row in mysql.ndb_replication, and there must be an exceptions table in the same database as the table being replicated.

Read conflict detection and resolution.  NDB Cluster also supports tracking of read operations, which makes it possible in circular replication setups to manage conflicts between reads of a given row in one cluster and updates or deletes of the same row in another. This example uses employee and department tables to model a scenario in which an employee is moved from one department to another on the master cluster (which we refer to hereafter as cluster A) while the slave cluster (hereafter B) updates the employee count of the employee's former department in an interleaved transaction.

The data tables have been created using the following SQL statements:

# Employee table
CREATE TABLE employee (
    id INT PRIMARY KEY,
    name VARCHAR(2000),
    dept INT NOT NULL
)   ENGINE=NDB;

# Department table
CREATE TABLE department (
    id INT PRIMARY KEY,
    name VARCHAR(2000),
    members INT
)   ENGINE=NDB;

The contents of the two tables include the rows shown in the (partial) output of the following SELECT statements:

mysql> SELECT id, name, dept FROM employee;
+------+------+------+
| id   | name | dept |
+------+------+------+
...
|  998 | Mike |    3 |
|  999 | Joe  |    3 |
| 1000 | Mary |    3 |
...
+------+------+------+

mysql> SELECT id, name, members FROM department;
+-----+-------------+---------+
| id  | name        | members |
+-----+-------------+---------+
...
| 3   | Old project | 24      |
...
+-----+-------------+---------+

We assume that we are already using an exceptions table that includes the four required columns (and these are used for this table's primary key), the optional columns for operation type and cause, and the original table's primary key column, created using the SQL statement shown here:

CREATE TABLE employee$EX  (
    NDB$server_id INT UNSIGNED,
    NDB$master_server_id INT UNSIGNED,
    NDB$master_epoch BIGINT UNSIGNED,
    NDB$count INT UNSIGNED,

    NDB$OP_TYPE ENUM( 'WRITE_ROW','UPDATE_ROW', 'DELETE_ROW',
                      'REFRESH_ROW','READ_ROW') NOT NULL,
    NDB$CFT_CAUSE ENUM( 'ROW_DOES_NOT_EXIST',
                        'ROW_ALREADY_EXISTS',
                        'DATA_IN_CONFLICT',
                        'TRANS_IN_CONFLICT') NOT NULL,

    id INT NOT NULL,

    PRIMARY KEY(NDB$server_id, NDB$master_server_id, NDB$master_epoch, NDB$count)
)   ENGINE=NDB;

Suppose there occur the two simultaneous transactions on the two clusters. On cluster A, we create a new department, then move employee number 999 into that department, using the following SQL statements:

BEGIN;
  INSERT INTO department VALUES (4, "New project", 1);
  UPDATE employee SET dept = 4 WHERE id = 999;
COMMIT;

At the same time, on cluster B, another transaction reads from employee, as shown here:

BEGIN;
  SELECT name FROM employee WHERE id = 999;
  UPDATE department SET members = members - 1  WHERE id = 3;
COMMIT;

The conflicting transactions are not normally detected by the conflict resolution mechanism, since the conflict is between a read (SELECT) and an update operation. You can circumvent this issue by executing SET ndb_log_exclusive_reads = 1 on the slave cluster. Acquiring exclusive read locks in this way causes any rows read on the master to be flagged as needing conflict resolution on the slave cluster. If we enable exclusive reads in this way prior to the logging of these transactions, the read on cluster B is tracked and sent to cluster A for resolution; the conflict on the employee row is detected, and the transaction on cluster B is aborted.

The conflict is registered in the exceptions table (on cluster A) as a READ_ROW operation (see Conflict resolution exceptions table, for a description of operation types), as shown here:

mysql> SELECT id, NDB$OP_TYPE, NDB$CFT_CAUSE FROM employee$EX;
+-------+-------------+-------------------+
| id    | NDB$OP_TYPE | NDB$CFT_CAUSE     |
+-------+-------------+-------------------+
...
| 999   | READ_ROW    | TRANS_IN_CONFLICT |
+-------+-------------+-------------------+

Any existing rows found in the read operation are flagged. This means that multiple rows resulting from the same conflict may be logged in the exceptions table, as shown by examining the effects of a conflict between an update on cluster A and a read of multiple rows on cluster B from the same table in simultaneous transactions. The transaction executed on cluster A is shown here:

BEGIN;
  INSERT INTO department VALUES (4, "New project", 0);
  UPDATE employee SET dept = 4 WHERE dept = 3;
  SELECT COUNT(*) INTO @count FROM employee WHERE dept = 4;
  UPDATE department SET members = @count WHERE id = 4;
COMMIT;

Concurrently a transaction containing the statements shown here runs on cluster B:

SET ndb_log_exclusive_reads = 1;  # Must be set if not already enabled
...
BEGIN;
  SELECT COUNT(*) INTO @count FROM employee WHERE dept = 3 FOR UPDATE;
  UPDATE department SET members = @count WHERE id = 3;
COMMIT;

In this case, all three rows matching the WHERE condition in the second transaction's SELECT are read, and are thus flagged in the exceptions table, as shown here:

mysql> SELECT id, NDB$OP_TYPE, NDB$CFT_CAUSE FROM employee$EX;
+-------+-------------+-------------------+
| id    | NDB$OP_TYPE | NDB$CFT_CAUSE     |
+-------+-------------+-------------------+
...
| 998   | READ_ROW    | TRANS_IN_CONFLICT |
| 999   | READ_ROW    | TRANS_IN_CONFLICT |
| 1000  | READ_ROW    | TRANS_IN_CONFLICT |
...
+-------+-------------+-------------------+

Read tracking is performed on the basis of existing rows only. A read based on a given condition tracks conflicts only for any rows that are found, not for any rows that are inserted in an interleaved transaction. This is similar to how exclusive row locking is performed in a single instance of NDB Cluster.

21.7 NDB Cluster Release Notes

Changes in NDB Cluster releases are documented separately from this reference manual; you can find release notes for the changes in each NDB Cluster 7.5 release at NDB 7.5 Release Notes, and for each NDB Cluster 7.6 release at NDB 7.6 Release Notes.

You can obtain release notes for older versions of NDB Cluster from NDB Cluster Release Notes.
